Feb 12 19:18:16.013803 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 12 19:18:16.013820 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Feb 12 18:07:00 -00 2024
Feb 12 19:18:16.013828 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Feb 12 19:18:16.013835 kernel: printk: bootconsole [pl11] enabled
Feb 12 19:18:16.013840 kernel: efi: EFI v2.70 by EDK II
Feb 12 19:18:16.013846 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef2e698 RNG=0x3fd89998 MEMRESERVE=0x37e73f98
Feb 12 19:18:16.013852 kernel: random: crng init done
Feb 12 19:18:16.013857 kernel: ACPI: Early table checksum verification disabled
Feb 12 19:18:16.013863 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL)
Feb 12 19:18:16.013868 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:18:16.013874 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:18:16.013880 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 12 19:18:16.013885 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:18:16.013891 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:18:16.013897 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:18:16.013903 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:18:16.013909 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:18:16.013916 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:18:16.013922 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Feb 12 19:18:16.013927 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:18:16.013933 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Feb 12 19:18:16.013939 kernel: NUMA: Failed to initialise from firmware
Feb 12 19:18:16.013944 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
Feb 12 19:18:16.013950 kernel: NUMA: NODE_DATA [mem 0x1bf7f1900-0x1bf7f6fff]
Feb 12 19:18:16.013956 kernel: Zone ranges:
Feb 12 19:18:16.013961 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Feb 12 19:18:16.013967 kernel: DMA32 empty
Feb 12 19:18:16.013974 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Feb 12 19:18:16.013979 kernel: Movable zone start for each node
Feb 12 19:18:16.013985 kernel: Early memory node ranges
Feb 12 19:18:16.013990 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Feb 12 19:18:16.013996 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff]
Feb 12 19:18:16.014002 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff]
Feb 12 19:18:16.014007 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff]
Feb 12 19:18:16.014013 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff]
Feb 12 19:18:16.014019 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff]
Feb 12 19:18:16.014024 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff]
Feb 12 19:18:16.014030 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff]
Feb 12 19:18:16.014036 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Feb 12 19:18:16.014043 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Feb 12 19:18:16.014051 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Feb 12 19:18:16.014057 kernel: psci: probing for conduit method from ACPI.
Feb 12 19:18:16.014063 kernel: psci: PSCIv1.1 detected in firmware.
Feb 12 19:18:16.014069 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 12 19:18:16.014077 kernel: psci: MIGRATE_INFO_TYPE not supported.
Feb 12 19:18:16.014083 kernel: psci: SMC Calling Convention v1.4
Feb 12 19:18:16.014089 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1
Feb 12 19:18:16.014095 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1
Feb 12 19:18:16.014101 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 12 19:18:16.014107 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 12 19:18:16.014113 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 12 19:18:16.014137 kernel: Detected PIPT I-cache on CPU0
Feb 12 19:18:16.014144 kernel: CPU features: detected: GIC system register CPU interface
Feb 12 19:18:16.014150 kernel: CPU features: detected: Hardware dirty bit management
Feb 12 19:18:16.014156 kernel: CPU features: detected: Spectre-BHB
Feb 12 19:18:16.014162 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 12 19:18:16.014172 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 12 19:18:16.014180 kernel: CPU features: detected: ARM erratum 1418040
Feb 12 19:18:16.014186 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Feb 12 19:18:16.014193 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Feb 12 19:18:16.014201 kernel: Policy zone: Normal
Feb 12 19:18:16.014211 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40
Feb 12 19:18:16.014218 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 19:18:16.014226 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 12 19:18:16.014233 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 12 19:18:16.014240 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 19:18:16.014248 kernel: software IO TLB: mapped [mem 0x000000003abd2000-0x000000003ebd2000] (64MB)
Feb 12 19:18:16.014256 kernel: Memory: 3991932K/4194160K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 202228K reserved, 0K cma-reserved)
Feb 12 19:18:16.014263 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 12 19:18:16.014270 kernel: trace event string verifier disabled
Feb 12 19:18:16.014277 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 12 19:18:16.014284 kernel: rcu: RCU event tracing is enabled.
Feb 12 19:18:16.014291 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 12 19:18:16.014298 kernel: Trampoline variant of Tasks RCU enabled.
Feb 12 19:18:16.014305 kernel: Tracing variant of Tasks RCU enabled.
Feb 12 19:18:16.014313 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 19:18:16.014319 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 12 19:18:16.014327 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 12 19:18:16.014333 kernel: GICv3: 960 SPIs implemented
Feb 12 19:18:16.014339 kernel: GICv3: 0 Extended SPIs implemented
Feb 12 19:18:16.014345 kernel: GICv3: Distributor has no Range Selector support
Feb 12 19:18:16.014351 kernel: Root IRQ handler: gic_handle_irq
Feb 12 19:18:16.014357 kernel: GICv3: 16 PPIs implemented
Feb 12 19:18:16.014363 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Feb 12 19:18:16.014371 kernel: ITS: No ITS available, not enabling LPIs
Feb 12 19:18:16.014378 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:18:16.014385 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 12 19:18:16.014392 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 12 19:18:16.014399 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 12 19:18:16.014408 kernel: Console: colour dummy device 80x25
Feb 12 19:18:16.014416 kernel: printk: console [tty1] enabled
Feb 12 19:18:16.014423 kernel: ACPI: Core revision 20210730
Feb 12 19:18:16.014430 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 12 19:18:16.014438 kernel: pid_max: default: 32768 minimum: 301
Feb 12 19:18:16.014445 kernel: LSM: Security Framework initializing
Feb 12 19:18:16.014452 kernel: SELinux: Initializing.
Feb 12 19:18:16.014459 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 19:18:16.014465 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 19:18:16.014473 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Feb 12 19:18:16.014479 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0
Feb 12 19:18:16.014486 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 19:18:16.014492 kernel: Remapping and enabling EFI services.
Feb 12 19:18:16.014498 kernel: smp: Bringing up secondary CPUs ...
Feb 12 19:18:16.014504 kernel: Detected PIPT I-cache on CPU1
Feb 12 19:18:16.014511 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Feb 12 19:18:16.014517 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 12 19:18:16.014523 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 12 19:18:16.014531 kernel: smp: Brought up 1 node, 2 CPUs
Feb 12 19:18:16.014537 kernel: SMP: Total of 2 processors activated.
Feb 12 19:18:16.014543 kernel: CPU features: detected: 32-bit EL0 Support
Feb 12 19:18:16.014550 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Feb 12 19:18:16.014556 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 12 19:18:16.014563 kernel: CPU features: detected: CRC32 instructions
Feb 12 19:18:16.014569 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 12 19:18:16.014575 kernel: CPU features: detected: LSE atomic instructions
Feb 12 19:18:16.014581 kernel: CPU features: detected: Privileged Access Never
Feb 12 19:18:16.014589 kernel: CPU: All CPU(s) started at EL1
Feb 12 19:18:16.014595 kernel: alternatives: patching kernel code
Feb 12 19:18:16.014606 kernel: devtmpfs: initialized
Feb 12 19:18:16.014614 kernel: KASLR enabled
Feb 12 19:18:16.014620 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 19:18:16.014627 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 12 19:18:16.014634 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 19:18:16.014640 kernel: SMBIOS 3.1.0 present.
Feb 12 19:18:16.014647 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023
Feb 12 19:18:16.014654 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 19:18:16.014662 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 12 19:18:16.014668 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 12 19:18:16.014675 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 12 19:18:16.014681 kernel: audit: initializing netlink subsys (disabled)
Feb 12 19:18:16.014688 kernel: audit: type=2000 audit(0.089:1): state=initialized audit_enabled=0 res=1
Feb 12 19:18:16.014694 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 19:18:16.014701 kernel: cpuidle: using governor menu
Feb 12 19:18:16.014709 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 12 19:18:16.014716 kernel: ASID allocator initialised with 32768 entries
Feb 12 19:18:16.014722 kernel: ACPI: bus type PCI registered
Feb 12 19:18:16.014729 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 19:18:16.014735 kernel: Serial: AMBA PL011 UART driver
Feb 12 19:18:16.014742 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 12 19:18:16.014748 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 12 19:18:16.014755 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 19:18:16.014761 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 12 19:18:16.014769 kernel: cryptd: max_cpu_qlen set to 1000
Feb 12 19:18:16.014776 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 12 19:18:16.014782 kernel: ACPI: Added _OSI(Module Device)
Feb 12 19:18:16.014789 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 19:18:16.014795 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 19:18:16.014802 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 19:18:16.014809 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 19:18:16.014815 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 19:18:16.014822 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 19:18:16.014830 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 12 19:18:16.014836 kernel: ACPI: Interpreter enabled
Feb 12 19:18:16.014843 kernel: ACPI: Using GIC for interrupt routing
Feb 12 19:18:16.014850 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Feb 12 19:18:16.014856 kernel: printk: console [ttyAMA0] enabled
Feb 12 19:18:16.014863 kernel: printk: bootconsole [pl11] disabled
Feb 12 19:18:16.014870 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Feb 12 19:18:16.014876 kernel: iommu: Default domain type: Translated
Feb 12 19:18:16.014883 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 12 19:18:16.014891 kernel: vgaarb: loaded
Feb 12 19:18:16.014897 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 19:18:16.014904 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 12 19:18:16.014911 kernel: PTP clock support registered
Feb 12 19:18:16.014917 kernel: Registered efivars operations
Feb 12 19:18:16.014924 kernel: No ACPI PMU IRQ for CPU0
Feb 12 19:18:16.014930 kernel: No ACPI PMU IRQ for CPU1
Feb 12 19:18:16.014937 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 12 19:18:16.014943 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 19:18:16.014951 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 19:18:16.014958 kernel: pnp: PnP ACPI init
Feb 12 19:18:16.014964 kernel: pnp: PnP ACPI: found 0 devices
Feb 12 19:18:16.014971 kernel: NET: Registered PF_INET protocol family
Feb 12 19:18:16.014977 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 12 19:18:16.014984 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 12 19:18:16.014991 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 19:18:16.014997 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 12 19:18:16.015004 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 12 19:18:16.015012 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 12 19:18:16.015018 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 19:18:16.015025 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 19:18:16.015032 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 19:18:16.015038 kernel: PCI: CLS 0 bytes, default 64
Feb 12 19:18:16.015045 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Feb 12 19:18:16.015052 kernel: kvm [1]: HYP mode not available
Feb 12 19:18:16.015058 kernel: Initialise system trusted keyrings
Feb 12 19:18:16.015065 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 12 19:18:16.015072 kernel: Key type asymmetric registered
Feb 12 19:18:16.015079 kernel: Asymmetric key parser 'x509' registered
Feb 12 19:18:16.015085 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 19:18:16.015092 kernel: io scheduler mq-deadline registered
Feb 12 19:18:16.015098 kernel: io scheduler kyber registered
Feb 12 19:18:16.015105 kernel: io scheduler bfq registered
Feb 12 19:18:16.015111 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 19:18:16.015124 kernel: thunder_xcv, ver 1.0
Feb 12 19:18:16.015131 kernel: thunder_bgx, ver 1.0
Feb 12 19:18:16.015139 kernel: nicpf, ver 1.0
Feb 12 19:18:16.015145 kernel: nicvf, ver 1.0
Feb 12 19:18:16.015263 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 12 19:18:16.015324 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-12T19:18:15 UTC (1707765495)
Feb 12 19:18:16.015332 kernel: efifb: probing for efifb
Feb 12 19:18:16.015339 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 12 19:18:16.015346 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 12 19:18:16.015353 kernel: efifb: scrolling: redraw
Feb 12 19:18:16.015361 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 12 19:18:16.015368 kernel: Console: switching to colour frame buffer device 128x48
Feb 12 19:18:16.015375 kernel: fb0: EFI VGA frame buffer device
Feb 12 19:18:16.015381 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Feb 12 19:18:16.015388 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 12 19:18:16.015395 kernel: NET: Registered PF_INET6 protocol family
Feb 12 19:18:16.015401 kernel: Segment Routing with IPv6
Feb 12 19:18:16.015408 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 19:18:16.015415 kernel: NET: Registered PF_PACKET protocol family
Feb 12 19:18:16.015423 kernel: Key type dns_resolver registered
Feb 12 19:18:16.015429 kernel: registered taskstats version 1
Feb 12 19:18:16.015436 kernel: Loading compiled-in X.509 certificates
Feb 12 19:18:16.015442 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: c8c3faa6fd8ae0112832fff0e3d0e58448a7eb6c'
Feb 12 19:18:16.015449 kernel: Key type .fscrypt registered
Feb 12 19:18:16.015455 kernel: Key type fscrypt-provisioning registered
Feb 12 19:18:16.015462 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 12 19:18:16.015468 kernel: ima: Allocated hash algorithm: sha1
Feb 12 19:18:16.015475 kernel: ima: No architecture policies found
Feb 12 19:18:16.015483 kernel: Freeing unused kernel memory: 34688K
Feb 12 19:18:16.015489 kernel: Run /init as init process
Feb 12 19:18:16.015496 kernel: with arguments:
Feb 12 19:18:16.015502 kernel: /init
Feb 12 19:18:16.015508 kernel: with environment:
Feb 12 19:18:16.015515 kernel: HOME=/
Feb 12 19:18:16.015521 kernel: TERM=linux
Feb 12 19:18:16.015527 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 19:18:16.015536 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 19:18:16.015546 systemd[1]: Detected virtualization microsoft.
Feb 12 19:18:16.015554 systemd[1]: Detected architecture arm64.
Feb 12 19:18:16.015560 systemd[1]: Running in initrd.
Feb 12 19:18:16.015567 systemd[1]: No hostname configured, using default hostname.
Feb 12 19:18:16.015574 systemd[1]: Hostname set to .
Feb 12 19:18:16.015581 systemd[1]: Initializing machine ID from random generator.
Feb 12 19:18:16.015588 systemd[1]: Queued start job for default target initrd.target.
Feb 12 19:18:16.015596 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 19:18:16.015603 systemd[1]: Reached target cryptsetup.target.
Feb 12 19:18:16.015610 systemd[1]: Reached target paths.target.
Feb 12 19:18:16.015617 systemd[1]: Reached target slices.target.
Feb 12 19:18:16.015624 systemd[1]: Reached target swap.target.
Feb 12 19:18:16.015631 systemd[1]: Reached target timers.target.
Feb 12 19:18:16.015638 systemd[1]: Listening on iscsid.socket.
Feb 12 19:18:16.015645 systemd[1]: Listening on iscsiuio.socket.
Feb 12 19:18:16.015654 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 19:18:16.015661 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 19:18:16.015668 systemd[1]: Listening on systemd-journald.socket.
Feb 12 19:18:16.015675 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 19:18:16.015682 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 19:18:16.015689 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 19:18:16.015696 systemd[1]: Reached target sockets.target.
Feb 12 19:18:16.015703 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 19:18:16.015710 systemd[1]: Finished network-cleanup.service.
Feb 12 19:18:16.015718 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 19:18:16.015725 systemd[1]: Starting systemd-journald.service...
Feb 12 19:18:16.015732 systemd[1]: Starting systemd-modules-load.service...
Feb 12 19:18:16.015739 systemd[1]: Starting systemd-resolved.service...
Feb 12 19:18:16.015751 systemd-journald[276]: Journal started
Feb 12 19:18:16.015787 systemd-journald[276]: Runtime Journal (/run/log/journal/72850da3e0244a52ab595269571434f9) is 8.0M, max 78.6M, 70.6M free.
Feb 12 19:18:16.004651 systemd-modules-load[277]: Inserted module 'overlay'
Feb 12 19:18:16.043136 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 19:18:16.043169 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 12 19:18:16.048054 kernel: Bridge firewalling registered
Feb 12 19:18:16.048169 systemd-modules-load[277]: Inserted module 'br_netfilter'
Feb 12 19:18:16.073034 systemd-resolved[278]: Positive Trust Anchors:
Feb 12 19:18:16.080004 systemd[1]: Started systemd-journald.service.
Feb 12 19:18:16.080029 kernel: SCSI subsystem initialized
Feb 12 19:18:16.073050 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 19:18:16.138529 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 12 19:18:16.138552 kernel: audit: type=1130 audit(1707765496.102:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:16.138563 kernel: device-mapper: uevent: version 1.0.3
Feb 12 19:18:16.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:16.073077 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 19:18:16.206164 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 12 19:18:16.206193 kernel: audit: type=1130 audit(1707765496.143:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:16.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:16.079284 systemd-resolved[278]: Defaulting to hostname 'linux'.
Feb 12 19:18:16.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:16.102821 systemd[1]: Started systemd-resolved.service.
Feb 12 19:18:16.171448 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 19:18:16.261244 kernel: audit: type=1130 audit(1707765496.210:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:16.261266 kernel: audit: type=1130 audit(1707765496.240:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:16.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:16.205575 systemd-modules-load[277]: Inserted module 'dm_multipath'
Feb 12 19:18:16.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:16.232590 systemd[1]: Finished systemd-fsck-usr.service.
Feb 12 19:18:16.311070 kernel: audit: type=1130 audit(1707765496.265:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:16.311090 kernel: audit: type=1130 audit(1707765496.291:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:16.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:16.241012 systemd[1]: Finished systemd-modules-load.service.
Feb 12 19:18:16.265961 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 12 19:18:16.292107 systemd[1]: Reached target nss-lookup.target.
Feb 12 19:18:16.320817 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 12 19:18:16.329334 systemd[1]: Starting systemd-sysctl.service...
Feb 12 19:18:16.343105 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 19:18:16.360482 systemd[1]: Finished systemd-sysctl.service.
Feb 12 19:18:16.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:16.373653 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 12 19:18:16.398156 kernel: audit: type=1130 audit(1707765496.373:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:16.394358 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 19:18:16.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:16.420951 systemd[1]: Starting dracut-cmdline.service...
Feb 12 19:18:16.450193 kernel: audit: type=1130 audit(1707765496.393:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:16.450216 kernel: audit: type=1130 audit(1707765496.419:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:16.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:16.455545 dracut-cmdline[299]: dracut-dracut-053
Feb 12 19:18:16.461235 dracut-cmdline[299]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40
Feb 12 19:18:16.524139 kernel: Loading iSCSI transport class v2.0-870.
Feb 12 19:18:16.535146 kernel: iscsi: registered transport (tcp)
Feb 12 19:18:16.555457 kernel: iscsi: registered transport (qla4xxx)
Feb 12 19:18:16.555491 kernel: QLogic iSCSI HBA Driver
Feb 12 19:18:16.584983 systemd[1]: Finished dracut-cmdline.service.
Feb 12 19:18:16.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:16.591025 systemd[1]: Starting dracut-pre-udev.service...
Feb 12 19:18:16.647137 kernel: raid6: neonx8 gen() 13814 MB/s
Feb 12 19:18:16.668126 kernel: raid6: neonx8 xor() 10823 MB/s
Feb 12 19:18:16.689130 kernel: raid6: neonx4 gen() 13494 MB/s
Feb 12 19:18:16.711128 kernel: raid6: neonx4 xor() 11145 MB/s
Feb 12 19:18:16.732126 kernel: raid6: neonx2 gen() 12931 MB/s
Feb 12 19:18:16.753130 kernel: raid6: neonx2 xor() 10243 MB/s
Feb 12 19:18:16.775127 kernel: raid6: neonx1 gen() 10488 MB/s
Feb 12 19:18:16.796126 kernel: raid6: neonx1 xor() 8789 MB/s
Feb 12 19:18:16.817126 kernel: raid6: int64x8 gen() 6288 MB/s
Feb 12 19:18:16.839127 kernel: raid6: int64x8 xor() 3545 MB/s
Feb 12 19:18:16.860126 kernel: raid6: int64x4 gen() 7262 MB/s
Feb 12 19:18:16.881126 kernel: raid6: int64x4 xor() 3858 MB/s
Feb 12 19:18:16.904126 kernel: raid6: int64x2 gen() 6149 MB/s
Feb 12 19:18:16.924126 kernel: raid6: int64x2 xor() 3322 MB/s
Feb 12 19:18:16.945127 kernel: raid6: int64x1 gen() 5039 MB/s
Feb 12 19:18:16.971178 kernel: raid6: int64x1 xor() 2647 MB/s
Feb 12 19:18:16.971188 kernel: raid6: using algorithm neonx8 gen() 13814 MB/s
Feb 12 19:18:16.971196 kernel: raid6: .... xor() 10823 MB/s, rmw enabled
Feb 12 19:18:16.975791 kernel: raid6: using neon recovery algorithm
Feb 12 19:18:16.994129 kernel: xor: measuring software checksum speed
Feb 12 19:18:16.999128 kernel: 8regs : 17293 MB/sec
Feb 12 19:18:16.999137 kernel: 32regs : 20749 MB/sec
Feb 12 19:18:17.007733 kernel: arm64_neon : 27873 MB/sec
Feb 12 19:18:17.007744 kernel: xor: using function: arm64_neon (27873 MB/sec)
Feb 12 19:18:17.071141 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 12 19:18:17.080367 systemd[1]: Finished dracut-pre-udev.service.
Feb 12 19:18:17.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:17.090000 audit: BPF prog-id=7 op=LOAD
Feb 12 19:18:17.090000 audit: BPF prog-id=8 op=LOAD
Feb 12 19:18:17.091197 systemd[1]: Starting systemd-udevd.service...
Feb 12 19:18:17.110143 systemd-udevd[476]: Using default interface naming scheme 'v252'.
Feb 12 19:18:17.116687 systemd[1]: Started systemd-udevd.service.
Feb 12 19:18:17.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:17.128239 systemd[1]: Starting dracut-pre-trigger.service...
Feb 12 19:18:17.139413 dracut-pre-trigger[497]: rd.md=0: removing MD RAID activation
Feb 12 19:18:17.170729 systemd[1]: Finished dracut-pre-trigger.service.
Feb 12 19:18:17.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:17.177218 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 19:18:17.221467 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 19:18:17.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:18:17.276210 kernel: hv_vmbus: Vmbus version:5.3
Feb 12 19:18:17.295142 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 12 19:18:17.295190 kernel: hv_vmbus: registering driver hv_storvsc
Feb 12 19:18:17.301133 kernel: hv_vmbus: registering driver hid_hyperv
Feb 12 19:18:17.301178 kernel: scsi host0: storvsc_host_t
Feb 12 19:18:17.309152 kernel: hv_vmbus: registering driver hv_netvsc
Feb 12 19:18:17.309185 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Feb 12 19:18:17.329150 kernel: scsi host1: storvsc_host_t
Feb 12 19:18:17.329322 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Feb 12 19:18:17.346131 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Feb 12 19:18:17.346177 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Feb 12 19:18:17.364867 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Feb 12 19:18:17.389297 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Feb 12 19:18:17.389524 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 12 19:18:17.391144 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Feb 12 19:18:17.407235 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Feb 12 19:18:17.407420 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 12 19:18:17.412577 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 12 19:18:17.413159 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Feb 12 19:18:17.413304 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Feb 12 19:18:17.430140 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 12 19:18:17.438991 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 12 19:18:17.448242 kernel: hv_netvsc 002248b6-78ee-0022-48b6-78ee002248b6 eth0: VF slot 1 added
Feb 12 19:18:17.459148
kernel: hv_vmbus: registering driver hv_pci Feb 12 19:18:17.470777 kernel: hv_pci ad8e0c59-0ced-49a7-a1ab-0bf7bedc7708: PCI VMBus probing: Using version 0x10004 Feb 12 19:18:17.470952 kernel: hv_pci ad8e0c59-0ced-49a7-a1ab-0bf7bedc7708: PCI host bridge to bus 0ced:00 Feb 12 19:18:17.486905 kernel: pci_bus 0ced:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Feb 12 19:18:17.487056 kernel: pci_bus 0ced:00: No busn resource found for root bus, will use [bus 00-ff] Feb 12 19:18:17.504655 kernel: pci 0ced:00:02.0: [15b3:1018] type 00 class 0x020000 Feb 12 19:18:17.517112 kernel: pci 0ced:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Feb 12 19:18:17.540576 kernel: pci 0ced:00:02.0: enabling Extended Tags Feb 12 19:18:17.563176 kernel: pci 0ced:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 0ced:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Feb 12 19:18:17.577593 kernel: pci_bus 0ced:00: busn_res: [bus 00-ff] end is updated to 00 Feb 12 19:18:17.577731 kernel: pci 0ced:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Feb 12 19:18:17.620159 kernel: mlx5_core 0ced:00:02.0: firmware version: 16.30.1284 Feb 12 19:18:17.787143 kernel: mlx5_core 0ced:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) Feb 12 19:18:17.849328 kernel: hv_netvsc 002248b6-78ee-0022-48b6-78ee002248b6 eth0: VF registering: eth1 Feb 12 19:18:17.849510 kernel: mlx5_core 0ced:00:02.0 eth1: joined to eth0 Feb 12 19:18:17.863152 kernel: mlx5_core 0ced:00:02.0 enP3309s1: renamed from eth1 Feb 12 19:18:17.942087 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 12 19:18:17.996882 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (546) Feb 12 19:18:18.007999 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 19:18:18.223086 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. 
Feb 12 19:18:18.230267 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 12 19:18:18.244078 systemd[1]: Starting disk-uuid.service... Feb 12 19:18:18.316992 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 12 19:18:19.288137 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 12 19:18:19.288434 disk-uuid[598]: The operation has completed successfully. Feb 12 19:18:19.344290 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 12 19:18:19.348264 systemd[1]: Finished disk-uuid.service. Feb 12 19:18:19.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:19.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:19.363351 systemd[1]: Starting verity-setup.service... Feb 12 19:18:19.412155 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 12 19:18:19.593370 systemd[1]: Found device dev-mapper-usr.device. Feb 12 19:18:19.605436 systemd[1]: Mounting sysusr-usr.mount... Feb 12 19:18:19.609636 systemd[1]: Finished verity-setup.service. Feb 12 19:18:19.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:19.672134 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 12 19:18:19.672244 systemd[1]: Mounted sysusr-usr.mount. Feb 12 19:18:19.676836 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 12 19:18:19.677603 systemd[1]: Starting ignition-setup.service... 
Feb 12 19:18:19.685581 systemd[1]: Starting parse-ip-for-networkd.service... Feb 12 19:18:19.728992 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 12 19:18:19.729052 kernel: BTRFS info (device sda6): using free space tree Feb 12 19:18:19.729062 kernel: BTRFS info (device sda6): has skinny extents Feb 12 19:18:19.757967 systemd[1]: Finished parse-ip-for-networkd.service. Feb 12 19:18:19.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:19.767000 audit: BPF prog-id=9 op=LOAD Feb 12 19:18:19.768530 systemd[1]: Starting systemd-networkd.service... Feb 12 19:18:19.795011 systemd-networkd[868]: lo: Link UP Feb 12 19:18:19.798849 systemd-networkd[868]: lo: Gained carrier Feb 12 19:18:19.803699 systemd-networkd[868]: Enumeration completed Feb 12 19:18:19.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:19.804036 systemd[1]: Started systemd-networkd.service. Feb 12 19:18:19.809449 systemd[1]: Reached target network.target. Feb 12 19:18:19.814426 systemd-networkd[868]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:18:19.820909 systemd[1]: Starting iscsiuio.service... Feb 12 19:18:19.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:19.833289 systemd[1]: Started iscsiuio.service. 
Feb 12 19:18:19.862875 iscsid[875]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 12 19:18:19.862875 iscsid[875]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 12 19:18:19.862875 iscsid[875]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 12 19:18:19.862875 iscsid[875]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 12 19:18:19.862875 iscsid[875]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 12 19:18:19.862875 iscsid[875]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 12 19:18:19.862875 iscsid[875]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 12 19:18:19.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:19.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:19.849697 systemd[1]: Starting iscsid.service... Feb 12 19:18:19.859571 systemd[1]: Started iscsid.service. Feb 12 19:18:19.868727 systemd[1]: Starting dracut-initqueue.service... Feb 12 19:18:19.910064 systemd[1]: Finished dracut-initqueue.service. Feb 12 19:18:19.936271 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 12 19:18:20.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:18:19.936585 systemd[1]: Reached target remote-fs-pre.target. Feb 12 19:18:20.048951 kernel: kauditd_printk_skb: 16 callbacks suppressed Feb 12 19:18:20.048973 kernel: audit: type=1130 audit(1707765500.008:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:19.948422 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 19:18:19.963530 systemd[1]: Reached target remote-fs.target. Feb 12 19:18:19.980951 systemd[1]: Starting dracut-pre-mount.service... Feb 12 19:18:19.998439 systemd[1]: Finished dracut-pre-mount.service. Feb 12 19:18:20.089944 systemd[1]: Finished ignition-setup.service. Feb 12 19:18:20.121727 kernel: mlx5_core 0ced:00:02.0 enP3309s1: Link up Feb 12 19:18:20.121903 kernel: audit: type=1130 audit(1707765500.102:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:20.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:20.122745 systemd[1]: Starting ignition-fetch-offline.service... 
Feb 12 19:18:20.139559 kernel: hv_netvsc 002248b6-78ee-0022-48b6-78ee002248b6 eth0: Data path switched to VF: enP3309s1 Feb 12 19:18:20.146195 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 19:18:20.146511 systemd-networkd[868]: enP3309s1: Link UP Feb 12 19:18:20.146740 systemd-networkd[868]: eth0: Link UP Feb 12 19:18:20.147091 systemd-networkd[868]: eth0: Gained carrier Feb 12 19:18:20.159319 systemd-networkd[868]: enP3309s1: Gained carrier Feb 12 19:18:20.173187 systemd-networkd[868]: eth0: DHCPv4 address 10.200.20.4/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 12 19:18:22.172258 systemd-networkd[868]: eth0: Gained IPv6LL Feb 12 19:18:23.093035 ignition[895]: Ignition 2.14.0 Feb 12 19:18:23.093047 ignition[895]: Stage: fetch-offline Feb 12 19:18:23.093099 ignition[895]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:18:23.093137 ignition[895]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:18:23.206386 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:18:23.206559 ignition[895]: parsed url from cmdline: "" Feb 12 19:18:23.206563 ignition[895]: no config URL provided Feb 12 19:18:23.206568 ignition[895]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 19:18:23.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:23.218642 systemd[1]: Finished ignition-fetch-offline.service. Feb 12 19:18:23.253729 kernel: audit: type=1130 audit(1707765503.224:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:18:23.206576 ignition[895]: no config at "/usr/lib/ignition/user.ign" Feb 12 19:18:23.248195 systemd[1]: Starting ignition-fetch.service... Feb 12 19:18:23.206582 ignition[895]: failed to fetch config: resource requires networking Feb 12 19:18:23.206887 ignition[895]: Ignition finished successfully Feb 12 19:18:23.259773 ignition[902]: Ignition 2.14.0 Feb 12 19:18:23.259780 ignition[902]: Stage: fetch Feb 12 19:18:23.259900 ignition[902]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:18:23.259920 ignition[902]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:18:23.266626 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:18:23.270305 ignition[902]: parsed url from cmdline: "" Feb 12 19:18:23.270312 ignition[902]: no config URL provided Feb 12 19:18:23.270320 ignition[902]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 19:18:23.270341 ignition[902]: no config at "/usr/lib/ignition/user.ign" Feb 12 19:18:23.270379 ignition[902]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 12 19:18:23.301741 ignition[902]: GET result: OK Feb 12 19:18:23.301942 ignition[902]: config has been read from IMDS userdata Feb 12 19:18:23.302000 ignition[902]: parsing config with SHA512: b6d01b44af51c41502618771fd3bfbfde65a466f3cf98407f6c08822860e0f2d1325fe9f7bd7ec99d4574166b563cf12077ccc050eb0285eba134fb8b90e30d2 Feb 12 19:18:23.352554 unknown[902]: fetched base config from "system" Feb 12 19:18:23.353256 ignition[902]: fetch: fetch complete Feb 12 19:18:23.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:18:23.352564 unknown[902]: fetched base config from "system" Feb 12 19:18:23.391382 kernel: audit: type=1130 audit(1707765503.363:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:23.353261 ignition[902]: fetch: fetch passed Feb 12 19:18:23.352570 unknown[902]: fetched user config from "azure" Feb 12 19:18:23.353302 ignition[902]: Ignition finished successfully Feb 12 19:18:23.354568 systemd[1]: Finished ignition-fetch.service. Feb 12 19:18:23.399580 ignition[909]: Ignition 2.14.0 Feb 12 19:18:23.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:23.364697 systemd[1]: Starting ignition-kargs.service... Feb 12 19:18:23.399587 ignition[909]: Stage: kargs Feb 12 19:18:23.484196 kernel: audit: type=1130 audit(1707765503.419:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:23.484221 kernel: audit: type=1130 audit(1707765503.460:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:23.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:23.409577 systemd[1]: Finished ignition-kargs.service. Feb 12 19:18:23.399711 ignition[909]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:18:23.442182 systemd[1]: Starting ignition-disks.service... 
Feb 12 19:18:23.399732 ignition[909]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:18:23.455761 systemd[1]: Finished ignition-disks.service. Feb 12 19:18:23.402763 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:18:23.460919 systemd[1]: Reached target initrd-root-device.target. Feb 12 19:18:23.406661 ignition[909]: kargs: kargs passed Feb 12 19:18:23.490100 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:18:23.406719 ignition[909]: Ignition finished successfully Feb 12 19:18:23.501940 systemd[1]: Reached target local-fs.target. Feb 12 19:18:23.449298 ignition[915]: Ignition 2.14.0 Feb 12 19:18:23.515656 systemd[1]: Reached target sysinit.target. Feb 12 19:18:23.449306 ignition[915]: Stage: disks Feb 12 19:18:23.526015 systemd[1]: Reached target basic.target. Feb 12 19:18:23.449423 ignition[915]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:18:23.536220 systemd[1]: Starting systemd-fsck-root.service... Feb 12 19:18:23.449442 ignition[915]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:18:23.452371 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:18:23.454852 ignition[915]: disks: disks passed Feb 12 19:18:23.454921 ignition[915]: Ignition finished successfully Feb 12 19:18:23.628847 systemd-fsck[923]: ROOT: clean, 602/7326000 files, 481070/7359488 blocks Feb 12 19:18:23.642031 systemd[1]: Finished systemd-fsck-root.service. Feb 12 19:18:23.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:23.670428 systemd[1]: Mounting sysroot.mount... 
Feb 12 19:18:23.679634 kernel: audit: type=1130 audit(1707765503.647:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:23.696457 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 12 19:18:23.692306 systemd[1]: Mounted sysroot.mount. Feb 12 19:18:23.698323 systemd[1]: Reached target initrd-root-fs.target. Feb 12 19:18:23.775075 systemd[1]: Mounting sysroot-usr.mount... Feb 12 19:18:23.780076 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 12 19:18:23.787455 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 12 19:18:23.787490 systemd[1]: Reached target ignition-diskful.target. Feb 12 19:18:23.793577 systemd[1]: Mounted sysroot-usr.mount. Feb 12 19:18:23.854320 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 12 19:18:23.859872 systemd[1]: Starting initrd-setup-root.service... Feb 12 19:18:23.884148 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (934) Feb 12 19:18:23.896288 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 12 19:18:23.896337 kernel: BTRFS info (device sda6): using free space tree Feb 12 19:18:23.901154 kernel: BTRFS info (device sda6): has skinny extents Feb 12 19:18:23.903302 initrd-setup-root[939]: cut: /sysroot/etc/passwd: No such file or directory Feb 12 19:18:23.913954 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 19:18:23.975939 initrd-setup-root[965]: cut: /sysroot/etc/group: No such file or directory Feb 12 19:18:23.985188 initrd-setup-root[973]: cut: /sysroot/etc/shadow: No such file or directory Feb 12 19:18:23.993990 initrd-setup-root[981]: cut: /sysroot/etc/gshadow: No such file or directory Feb 12 19:18:24.653941 systemd[1]: Finished initrd-setup-root.service. 
Feb 12 19:18:24.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:24.681554 systemd[1]: Starting ignition-mount.service... Feb 12 19:18:24.692762 kernel: audit: type=1130 audit(1707765504.658:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:24.692284 systemd[1]: Starting sysroot-boot.service... Feb 12 19:18:24.702187 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 12 19:18:24.702293 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 12 19:18:24.731190 ignition[1001]: INFO : Ignition 2.14.0 Feb 12 19:18:24.731190 ignition[1001]: INFO : Stage: mount Feb 12 19:18:24.747245 ignition[1001]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:18:24.747245 ignition[1001]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:18:24.747245 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:18:24.747245 ignition[1001]: INFO : mount: mount passed Feb 12 19:18:24.747245 ignition[1001]: INFO : Ignition finished successfully Feb 12 19:18:24.835172 kernel: audit: type=1130 audit(1707765504.749:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:24.835196 kernel: audit: type=1130 audit(1707765504.783:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:18:24.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:24.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:24.743892 systemd[1]: Finished sysroot-boot.service. Feb 12 19:18:24.776760 systemd[1]: Finished ignition-mount.service. Feb 12 19:18:25.199925 coreos-metadata[933]: Feb 12 19:18:25.199 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 12 19:18:25.210443 coreos-metadata[933]: Feb 12 19:18:25.210 INFO Fetch successful Feb 12 19:18:25.244352 coreos-metadata[933]: Feb 12 19:18:25.244 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 12 19:18:25.260141 coreos-metadata[933]: Feb 12 19:18:25.260 INFO Fetch successful Feb 12 19:18:25.266566 coreos-metadata[933]: Feb 12 19:18:25.266 INFO wrote hostname ci-3510.3.2-a-f75f2c89dc to /sysroot/etc/hostname Feb 12 19:18:25.276628 systemd[1]: Finished flatcar-metadata-hostname.service. Feb 12 19:18:25.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:25.283218 systemd[1]: Starting ignition-files.service... Feb 12 19:18:25.315770 kernel: audit: type=1130 audit(1707765505.282:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:25.314475 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Feb 12 19:18:25.334135 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1012) Feb 12 19:18:25.347293 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 12 19:18:25.347307 kernel: BTRFS info (device sda6): using free space tree Feb 12 19:18:25.347316 kernel: BTRFS info (device sda6): has skinny extents Feb 12 19:18:25.358645 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 19:18:25.372988 ignition[1031]: INFO : Ignition 2.14.0 Feb 12 19:18:25.372988 ignition[1031]: INFO : Stage: files Feb 12 19:18:25.382720 ignition[1031]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:18:25.382720 ignition[1031]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:18:25.382720 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:18:25.382720 ignition[1031]: DEBUG : files: compiled without relabeling support, skipping Feb 12 19:18:25.382720 ignition[1031]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 12 19:18:25.382720 ignition[1031]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 12 19:18:25.459138 ignition[1031]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 12 19:18:25.467301 ignition[1031]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 12 19:18:25.475256 ignition[1031]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 12 19:18:25.475256 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" Feb 12 19:18:25.475256 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET 
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-arm64.tar.gz: attempt #1 Feb 12 19:18:25.473272 unknown[1031]: wrote ssh authorized keys file for user: core Feb 12 19:18:25.971260 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 12 19:18:26.135974 ignition[1031]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: db062e43351a63347871e7094115be2ae3853afcd346d47f7b51141da8c3202c2df58d2e17359322f632abcb37474fd7fdb3b7aadbc5cfd5cf6d3bad040b6251 Feb 12 19:18:26.152992 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" Feb 12 19:18:26.152992 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 12 19:18:26.152992 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 12 19:18:26.440011 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 12 19:18:26.662123 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 12 19:18:26.674545 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Feb 12 19:18:26.674545 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz: attempt #1 Feb 12 19:18:27.075975 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 12 19:18:27.334987 ignition[1031]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 
b2b7fb74f1b3cb8928f49e5bf9d4bc686e057e837fac3caf1b366d54757921dba80d70cc010399b274d136e8dee9a25b1ad87cdfdc4ffcf42cf88f3e8f99587a Feb 12 19:18:27.353060 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Feb 12 19:18:27.353060 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubectl" Feb 12 19:18:27.353060 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubectl: attempt #1 Feb 12 19:18:27.593098 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 12 19:18:27.899615 ignition[1031]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 14be61ec35669a27acf2df0380afb85b9b42311d50ca1165718421c5f605df1119ec9ae314696a674051712e80deeaa65e62d2d62ed4d107fe99d0aaf419dafc Feb 12 19:18:27.916879 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 12 19:18:27.916879 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 12 19:18:27.916879 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubeadm: attempt #1 Feb 12 19:18:27.953571 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 12 19:18:28.256066 ignition[1031]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 45b3100984c979ba0f1c0df8f4211474c2d75ebe916e677dff5fc8e3b3697cf7a953da94e356f39684cc860dff6878b772b7514c55651c2f866d9efeef23f970 Feb 12 19:18:28.272689 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 12 19:18:28.272689 
ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet" Feb 12 19:18:28.272689 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubelet: attempt #1 Feb 12 19:18:28.330020 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 12 19:18:28.997785 ignition[1031]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 71857ff499ae135fa478e1827a0ed8865e578a8d2b1e25876e914fd0beba03733801c0654bcd4c0567bafeb16887dafb2dbbe8d1116e6ea28dcd8366c142d348 Feb 12 19:18:29.014630 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 12 19:18:29.014630 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 12 19:18:29.014630 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 12 19:18:29.014630 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 12 19:18:29.014630 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Feb 12 19:18:29.355187 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 12 19:18:29.435299 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 12 19:18:29.445783 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Feb 12 19:18:29.445783 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(b): 
[finished] writing file "/sysroot/home/core/install.sh" Feb 12 19:18:29.445783 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 12 19:18:29.445783 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 12 19:18:29.445783 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 12 19:18:29.445783 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 12 19:18:29.445783 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 12 19:18:29.445783 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 12 19:18:29.445783 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 19:18:29.445783 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 19:18:29.445783 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 12 19:18:29.445783 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition Feb 12 19:18:29.590521 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1033) Feb 12 19:18:29.590544 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem454571369" Feb 12 19:18:29.590544 ignition[1031]: CRITICAL : files: 
createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem454571369": device or resource busy Feb 12 19:18:29.590544 ignition[1031]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem454571369", trying btrfs: device or resource busy Feb 12 19:18:29.590544 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem454571369" Feb 12 19:18:29.590544 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem454571369" Feb 12 19:18:29.719433 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem454571369" Feb 12 19:18:29.729373 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem454571369" Feb 12 19:18:29.729373 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 12 19:18:29.729373 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 12 19:18:29.729373 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(14): oem config not found in "/usr/share/oem", looking on oem partition Feb 12 19:18:29.720569 systemd[1]: mnt-oem454571369.mount: Deactivated successfully. 
Feb 12 19:18:29.780535 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3838107470" Feb 12 19:18:29.780535 ignition[1031]: CRITICAL : files: createFilesystemsFiles: createFiles: op(14): op(15): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3838107470": device or resource busy Feb 12 19:18:29.780535 ignition[1031]: ERROR : files: createFilesystemsFiles: createFiles: op(14): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3838107470", trying btrfs: device or resource busy Feb 12 19:18:29.780535 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3838107470" Feb 12 19:18:29.780535 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3838107470" Feb 12 19:18:29.780535 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [started] unmounting "/mnt/oem3838107470" Feb 12 19:18:29.780535 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [finished] unmounting "/mnt/oem3838107470" Feb 12 19:18:29.780535 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 12 19:18:29.780535 ignition[1031]: INFO : files: op(18): [started] processing unit "waagent.service" Feb 12 19:18:29.780535 ignition[1031]: INFO : files: op(18): [finished] processing unit "waagent.service" Feb 12 19:18:29.780535 ignition[1031]: INFO : files: op(19): [started] processing unit "nvidia.service" Feb 12 19:18:29.780535 ignition[1031]: INFO : files: op(19): [finished] processing unit "nvidia.service" Feb 12 19:18:29.780535 ignition[1031]: INFO : files: op(1a): [started] processing unit "prepare-helm.service" Feb 12 19:18:29.780535 ignition[1031]: INFO : files: op(1a): 
op(1b): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 12 19:18:29.780535 ignition[1031]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 12 19:18:29.780535 ignition[1031]: INFO : files: op(1a): [finished] processing unit "prepare-helm.service" Feb 12 19:18:29.780535 ignition[1031]: INFO : files: op(1c): [started] processing unit "prepare-cni-plugins.service" Feb 12 19:18:30.082414 kernel: audit: type=1130 audit(1707765509.785:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.082443 kernel: audit: type=1130 audit(1707765509.889:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.082453 kernel: audit: type=1131 audit(1707765509.907:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.082463 kernel: audit: type=1130 audit(1707765509.948:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.082472 kernel: audit: type=1130 audit(1707765510.050:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:29.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:18:29.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:29.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:29.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:29.755678 systemd[1]: mnt-oem3838107470.mount: Deactivated successfully. Feb 12 19:18:30.112641 kernel: audit: type=1131 audit(1707765510.074:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:18:30.112718 ignition[1031]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 19:18:30.112718 ignition[1031]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 19:18:30.112718 ignition[1031]: INFO : files: op(1c): [finished] processing unit "prepare-cni-plugins.service" Feb 12 19:18:30.112718 ignition[1031]: INFO : files: op(1e): [started] processing unit "prepare-critools.service" Feb 12 19:18:30.112718 ignition[1031]: INFO : files: op(1e): op(1f): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 19:18:30.112718 ignition[1031]: INFO : files: op(1e): op(1f): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 19:18:30.112718 ignition[1031]: INFO : files: op(1e): [finished] processing unit "prepare-critools.service" Feb 12 19:18:30.112718 ignition[1031]: INFO : files: op(20): [started] setting preset to enabled for "prepare-critools.service" Feb 12 19:18:30.112718 ignition[1031]: INFO : files: op(20): [finished] setting preset to enabled for "prepare-critools.service" Feb 12 19:18:30.112718 ignition[1031]: INFO : files: op(21): [started] setting preset to enabled for "waagent.service" Feb 12 19:18:30.112718 ignition[1031]: INFO : files: op(21): [finished] setting preset to enabled for "waagent.service" Feb 12 19:18:30.112718 ignition[1031]: INFO : files: op(22): [started] setting preset to enabled for "nvidia.service" Feb 12 19:18:30.112718 ignition[1031]: INFO : files: op(22): [finished] setting preset to enabled for "nvidia.service" Feb 12 19:18:30.112718 ignition[1031]: INFO : files: op(23): [started] setting preset to enabled for "prepare-helm.service" Feb 12 19:18:30.112718 ignition[1031]: INFO : files: op(23): 
[finished] setting preset to enabled for "prepare-helm.service" Feb 12 19:18:30.112718 ignition[1031]: INFO : files: op(24): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 19:18:30.112718 ignition[1031]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 19:18:30.112718 ignition[1031]: INFO : files: createResultFile: createFiles: op(25): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 12 19:18:30.112718 ignition[1031]: INFO : files: createResultFile: createFiles: op(25): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 12 19:18:30.112718 ignition[1031]: INFO : files: files passed Feb 12 19:18:30.112718 ignition[1031]: INFO : Ignition finished successfully Feb 12 19:18:30.407448 kernel: audit: type=1130 audit(1707765510.202:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.407485 kernel: audit: type=1131 audit(1707765510.287:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:29.775538 systemd[1]: Finished ignition-files.service. Feb 12 19:18:29.815750 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
Feb 12 19:18:30.426362 initrd-setup-root-after-ignition[1056]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 12 19:18:29.836404 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 12 19:18:30.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:29.837318 systemd[1]: Starting ignition-quench.service... Feb 12 19:18:30.477511 kernel: audit: type=1131 audit(1707765510.442:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:29.870778 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 12 19:18:30.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:29.871568 systemd[1]: Finished ignition-quench.service. Feb 12 19:18:30.512896 kernel: audit: type=1131 audit(1707765510.481:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:29.933708 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 12 19:18:30.537365 kernel: audit: type=1131 audit(1707765510.508:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 12 19:18:29.979079 systemd[1]: Reached target ignition-complete.target. Feb 12 19:18:30.569295 kernel: audit: type=1131 audit(1707765510.541:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:29.996467 systemd[1]: Starting initrd-parse-etc.service... Feb 12 19:18:30.036182 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 12 19:18:30.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.036309 systemd[1]: Finished initrd-parse-etc.service. Feb 12 19:18:30.601779 kernel: audit: type=1131 audit(1707765510.578:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.075267 systemd[1]: Reached target initrd-fs.target. Feb 12 19:18:30.105922 systemd[1]: Reached target initrd.target. Feb 12 19:18:30.118226 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 12 19:18:30.119165 systemd[1]: Starting dracut-pre-pivot.service... Feb 12 19:18:30.190516 systemd[1]: Finished dracut-pre-pivot.service. Feb 12 19:18:30.203321 systemd[1]: Starting initrd-cleanup.service... Feb 12 19:18:30.242360 systemd[1]: Stopped target nss-lookup.target. Feb 12 19:18:30.247903 systemd[1]: Stopped target remote-cryptsetup.target. 
Feb 12 19:18:30.645752 ignition[1069]: INFO : Ignition 2.14.0 Feb 12 19:18:30.645752 ignition[1069]: INFO : Stage: umount Feb 12 19:18:30.645752 ignition[1069]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:18:30.645752 ignition[1069]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 12 19:18:30.645752 ignition[1069]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 12 19:18:30.645752 ignition[1069]: INFO : umount: umount passed Feb 12 19:18:30.645752 ignition[1069]: INFO : Ignition finished successfully Feb 12 19:18:30.797564 kernel: audit: type=1131 audit(1707765510.666:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.797595 kernel: audit: type=1131 audit(1707765510.694:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.797605 kernel: audit: type=1130 audit(1707765510.734:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.797616 kernel: audit: type=1131 audit(1707765510.734:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:18:30.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.262324 systemd[1]: Stopped target timers.target. Feb 12 19:18:30.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.275007 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 12 19:18:30.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.275086 systemd[1]: Stopped dracut-pre-pivot.service. Feb 12 19:18:30.287302 systemd[1]: Stopped target initrd.target. Feb 12 19:18:30.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 12 19:18:30.315838 systemd[1]: Stopped target basic.target. Feb 12 19:18:30.327354 systemd[1]: Stopped target ignition-complete.target. Feb 12 19:18:30.339263 systemd[1]: Stopped target ignition-diskful.target. Feb 12 19:18:30.352394 systemd[1]: Stopped target initrd-root-device.target. Feb 12 19:18:30.364938 systemd[1]: Stopped target remote-fs.target. Feb 12 19:18:30.378675 systemd[1]: Stopped target remote-fs-pre.target. Feb 12 19:18:30.393031 systemd[1]: Stopped target sysinit.target. Feb 12 19:18:30.402496 systemd[1]: Stopped target local-fs.target. Feb 12 19:18:30.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.411772 systemd[1]: Stopped target local-fs-pre.target. Feb 12 19:18:30.420769 systemd[1]: Stopped target swap.target. Feb 12 19:18:30.430660 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 12 19:18:30.430722 systemd[1]: Stopped dracut-pre-mount.service. Feb 12 19:18:30.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.443019 systemd[1]: Stopped target cryptsetup.target. Feb 12 19:18:30.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.473142 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 12 19:18:30.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:18:30.939000 audit: BPF prog-id=6 op=UNLOAD Feb 12 19:18:30.473195 systemd[1]: Stopped dracut-initqueue.service. Feb 12 19:18:30.481957 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 12 19:18:30.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.481996 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 12 19:18:30.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.508311 systemd[1]: ignition-files.service: Deactivated successfully. Feb 12 19:18:30.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.508357 systemd[1]: Stopped ignition-files.service. Feb 12 19:18:30.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.541981 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 12 19:18:30.542039 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 12 19:18:31.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.631361 systemd[1]: Stopping ignition-mount.service... Feb 12 19:18:30.646503 systemd[1]: Stopping sysroot-boot.service... 
Feb 12 19:18:31.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.661725 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 12 19:18:31.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.661804 systemd[1]: Stopped systemd-udev-trigger.service. Feb 12 19:18:31.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.667202 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 12 19:18:30.667268 systemd[1]: Stopped dracut-pre-trigger.service. Feb 12 19:18:30.695395 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 12 19:18:30.695522 systemd[1]: Finished initrd-cleanup.service. Feb 12 19:18:31.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.735287 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 12 19:18:31.105790 kernel: hv_netvsc 002248b6-78ee-0022-48b6-78ee002248b6 eth0: Data path switched from VF: enP3309s1 Feb 12 19:18:31.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.735380 systemd[1]: Stopped ignition-mount.service. 
Feb 12 19:18:31.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.758553 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 12 19:18:31.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:31.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.783098 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 12 19:18:30.783178 systemd[1]: Stopped ignition-disks.service. Feb 12 19:18:30.794749 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 12 19:18:30.794806 systemd[1]: Stopped ignition-kargs.service. Feb 12 19:18:30.802090 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 12 19:18:30.802154 systemd[1]: Stopped ignition-fetch.service. Feb 12 19:18:30.810391 systemd[1]: Stopped target network.target. Feb 12 19:18:30.820604 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 12 19:18:30.820658 systemd[1]: Stopped ignition-fetch-offline.service. Feb 12 19:18:30.829145 systemd[1]: Stopped target paths.target. Feb 12 19:18:30.837125 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 12 19:18:30.845412 systemd[1]: Stopped systemd-ask-password-console.path. Feb 12 19:18:30.850475 systemd[1]: Stopped target slices.target. Feb 12 19:18:30.859217 systemd[1]: Stopped target sockets.target. Feb 12 19:18:30.867713 systemd[1]: iscsid.socket: Deactivated successfully. 
Feb 12 19:18:31.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:30.867762 systemd[1]: Closed iscsid.socket. Feb 12 19:18:30.875079 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 12 19:18:30.875099 systemd[1]: Closed iscsiuio.socket. Feb 12 19:18:30.882654 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 12 19:18:30.882694 systemd[1]: Stopped ignition-setup.service. Feb 12 19:18:30.890734 systemd[1]: Stopping systemd-networkd.service... Feb 12 19:18:30.898259 systemd[1]: Stopping systemd-resolved.service... Feb 12 19:18:30.902157 systemd-networkd[868]: eth0: DHCPv6 lease lost Feb 12 19:18:31.258000 audit: BPF prog-id=9 op=UNLOAD Feb 12 19:18:30.911264 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 19:18:31.280758 systemd-journald[276]: Received SIGTERM from PID 1 (systemd). Feb 12 19:18:31.280826 iscsid[875]: iscsid shutting down. Feb 12 19:18:30.911361 systemd[1]: Stopped systemd-networkd.service. Feb 12 19:18:30.922624 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 12 19:18:30.922710 systemd[1]: Stopped systemd-resolved.service. Feb 12 19:18:30.931167 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 12 19:18:30.931251 systemd[1]: Stopped sysroot-boot.service. Feb 12 19:18:30.940064 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 12 19:18:30.940110 systemd[1]: Closed systemd-networkd.socket. Feb 12 19:18:30.949084 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 12 19:18:30.949143 systemd[1]: Stopped initrd-setup-root.service. Feb 12 19:18:30.958183 systemd[1]: Stopping network-cleanup.service... Feb 12 19:18:30.967794 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 12 19:18:30.967856 systemd[1]: Stopped parse-ip-for-networkd.service. 
Feb 12 19:18:30.972732 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 19:18:30.972782 systemd[1]: Stopped systemd-sysctl.service. Feb 12 19:18:30.986601 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 12 19:18:30.986654 systemd[1]: Stopped systemd-modules-load.service. Feb 12 19:18:30.991530 systemd[1]: Stopping systemd-udevd.service... Feb 12 19:18:31.000964 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 12 19:18:31.001575 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 12 19:18:31.001720 systemd[1]: Stopped systemd-udevd.service. Feb 12 19:18:31.010907 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 12 19:18:31.010967 systemd[1]: Closed systemd-udevd-control.socket. Feb 12 19:18:31.019563 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 12 19:18:31.019600 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 12 19:18:31.024244 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 12 19:18:31.024298 systemd[1]: Stopped dracut-pre-udev.service. Feb 12 19:18:31.033272 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 12 19:18:31.033327 systemd[1]: Stopped dracut-cmdline.service. Feb 12 19:18:31.041257 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 12 19:18:31.041314 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 12 19:18:31.051901 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 12 19:18:31.070961 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 12 19:18:31.071056 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 12 19:18:31.095654 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 12 19:18:31.095720 systemd[1]: Stopped kmod-static-nodes.service. Feb 12 19:18:31.100430 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Feb 12 19:18:31.100473 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 12 19:18:31.111900 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 12 19:18:31.112408 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 12 19:18:31.112508 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 12 19:18:31.194510 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 12 19:18:31.194612 systemd[1]: Stopped network-cleanup.service. Feb 12 19:18:31.202482 systemd[1]: Reached target initrd-switch-root.target. Feb 12 19:18:31.213938 systemd[1]: Starting initrd-switch-root.service... Feb 12 19:18:31.233441 systemd[1]: Switching root. Feb 12 19:18:31.282047 systemd-journald[276]: Journal stopped Feb 12 19:18:43.770173 kernel: SELinux: Class mctp_socket not defined in policy. Feb 12 19:18:43.770194 kernel: SELinux: Class anon_inode not defined in policy. Feb 12 19:18:43.770204 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 12 19:18:43.770214 kernel: SELinux: policy capability network_peer_controls=1 Feb 12 19:18:43.770221 kernel: SELinux: policy capability open_perms=1 Feb 12 19:18:43.770229 kernel: SELinux: policy capability extended_socket_class=1 Feb 12 19:18:43.770238 kernel: SELinux: policy capability always_check_network=0 Feb 12 19:18:43.770246 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 12 19:18:43.770254 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 12 19:18:43.770262 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 12 19:18:43.770273 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 12 19:18:43.770282 systemd[1]: Successfully loaded SELinux policy in 304.811ms. Feb 12 19:18:43.770292 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.968ms. 
Feb 12 19:18:43.770302 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 19:18:43.770313 systemd[1]: Detected virtualization microsoft. Feb 12 19:18:43.770322 systemd[1]: Detected architecture arm64. Feb 12 19:18:43.770331 systemd[1]: Detected first boot. Feb 12 19:18:43.770340 systemd[1]: Hostname set to . Feb 12 19:18:43.770349 systemd[1]: Initializing machine ID from random generator. Feb 12 19:18:43.770358 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 12 19:18:43.770366 kernel: kauditd_printk_skb: 32 callbacks suppressed Feb 12 19:18:43.770375 kernel: audit: type=1400 audit(1707765515.927:87): avc: denied { associate } for pid=1102 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 12 19:18:43.770387 kernel: audit: type=1300 audit(1707765515.927:87): arch=c00000b7 syscall=5 success=yes exit=0 a0=40000224ac a1=4000028420 a2=4000026980 a3=32 items=0 ppid=1085 pid=1102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:18:43.770397 kernel: audit: type=1327 audit(1707765515.927:87): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 
12 19:18:43.770406 kernel: audit: type=1400 audit(1707765515.941:88): avc: denied { associate } for pid=1102 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 12 19:18:43.770416 kernel: audit: type=1300 audit(1707765515.941:88): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000022585 a2=1ed a3=0 items=2 ppid=1085 pid=1102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:18:43.770425 kernel: audit: type=1307 audit(1707765515.941:88): cwd="/" Feb 12 19:18:43.770435 kernel: audit: type=1302 audit(1707765515.941:88): item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:18:43.770444 kernel: audit: type=1302 audit(1707765515.941:88): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:18:43.770453 kernel: audit: type=1327 audit(1707765515.941:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 19:18:43.770462 systemd[1]: Populated /etc with preset unit settings. Feb 12 19:18:43.770472 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:18:43.770482 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Feb 12 19:18:43.770492 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:18:43.770502 kernel: audit: type=1334 audit(1707765522.975:89): prog-id=12 op=LOAD Feb 12 19:18:43.770511 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 12 19:18:43.770519 kernel: audit: type=1334 audit(1707765522.975:90): prog-id=3 op=UNLOAD Feb 12 19:18:43.770528 systemd[1]: Stopped iscsiuio.service. Feb 12 19:18:43.770537 kernel: audit: type=1334 audit(1707765522.975:91): prog-id=13 op=LOAD Feb 12 19:18:43.770546 kernel: audit: type=1334 audit(1707765522.975:92): prog-id=14 op=LOAD Feb 12 19:18:43.770556 systemd[1]: iscsid.service: Deactivated successfully. Feb 12 19:18:43.770567 kernel: audit: type=1334 audit(1707765522.975:93): prog-id=4 op=UNLOAD Feb 12 19:18:43.770575 systemd[1]: Stopped iscsid.service. Feb 12 19:18:43.770584 kernel: audit: type=1334 audit(1707765522.975:94): prog-id=5 op=UNLOAD Feb 12 19:18:43.770593 kernel: audit: type=1334 audit(1707765522.976:95): prog-id=15 op=LOAD Feb 12 19:18:43.770602 kernel: audit: type=1334 audit(1707765522.976:96): prog-id=12 op=UNLOAD Feb 12 19:18:43.770611 kernel: audit: type=1334 audit(1707765522.976:97): prog-id=16 op=LOAD Feb 12 19:18:43.770620 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 12 19:18:43.770629 kernel: audit: type=1334 audit(1707765522.976:98): prog-id=17 op=LOAD Feb 12 19:18:43.770639 systemd[1]: Stopped initrd-switch-root.service. Feb 12 19:18:43.770648 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 12 19:18:43.770658 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 12 19:18:43.770668 systemd[1]: Created slice system-addon\x2drun.slice. Feb 12 19:18:43.770678 systemd[1]: Created slice system-getty.slice. 
Feb 12 19:18:43.770688 systemd[1]: Created slice system-modprobe.slice. Feb 12 19:18:43.770697 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 12 19:18:43.770707 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 12 19:18:43.770716 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 12 19:18:43.770726 systemd[1]: Created slice user.slice. Feb 12 19:18:43.770736 systemd[1]: Started systemd-ask-password-console.path. Feb 12 19:18:43.770745 systemd[1]: Started systemd-ask-password-wall.path. Feb 12 19:18:43.770754 systemd[1]: Set up automount boot.automount. Feb 12 19:18:43.770763 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 12 19:18:43.770773 systemd[1]: Stopped target initrd-switch-root.target. Feb 12 19:18:43.770782 systemd[1]: Stopped target initrd-fs.target. Feb 12 19:18:43.770791 systemd[1]: Stopped target initrd-root-fs.target. Feb 12 19:18:43.770802 systemd[1]: Reached target integritysetup.target. Feb 12 19:18:43.770811 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 19:18:43.770821 systemd[1]: Reached target remote-fs.target. Feb 12 19:18:43.770830 systemd[1]: Reached target slices.target. Feb 12 19:18:43.770839 systemd[1]: Reached target swap.target. Feb 12 19:18:43.770848 systemd[1]: Reached target torcx.target. Feb 12 19:18:43.770859 systemd[1]: Reached target veritysetup.target. Feb 12 19:18:43.770869 systemd[1]: Listening on systemd-coredump.socket. Feb 12 19:18:43.770878 systemd[1]: Listening on systemd-initctl.socket. Feb 12 19:18:43.770888 systemd[1]: Listening on systemd-networkd.socket. Feb 12 19:18:43.770897 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 19:18:43.770907 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 19:18:43.770916 systemd[1]: Listening on systemd-userdbd.socket. Feb 12 19:18:43.770926 systemd[1]: Mounting dev-hugepages.mount... Feb 12 19:18:43.770936 systemd[1]: Mounting dev-mqueue.mount... 
Feb 12 19:18:43.770946 systemd[1]: Mounting media.mount... Feb 12 19:18:43.770955 systemd[1]: Mounting sys-kernel-debug.mount... Feb 12 19:18:43.770965 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 12 19:18:43.770974 systemd[1]: Mounting tmp.mount... Feb 12 19:18:43.770983 systemd[1]: Starting flatcar-tmpfiles.service... Feb 12 19:18:43.770993 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 12 19:18:43.771002 systemd[1]: Starting kmod-static-nodes.service... Feb 12 19:18:43.771011 systemd[1]: Starting modprobe@configfs.service... Feb 12 19:18:43.771022 systemd[1]: Starting modprobe@dm_mod.service... Feb 12 19:18:43.771032 systemd[1]: Starting modprobe@drm.service... Feb 12 19:18:43.771041 systemd[1]: Starting modprobe@efi_pstore.service... Feb 12 19:18:43.771050 systemd[1]: Starting modprobe@fuse.service... Feb 12 19:18:43.771060 systemd[1]: Starting modprobe@loop.service... Feb 12 19:18:43.771071 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 12 19:18:43.771081 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 12 19:18:43.771090 systemd[1]: Stopped systemd-fsck-root.service. Feb 12 19:18:43.771100 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 12 19:18:43.771110 systemd[1]: Stopped systemd-fsck-usr.service. Feb 12 19:18:43.771127 systemd[1]: Stopped systemd-journald.service. Feb 12 19:18:43.771138 kernel: loop: module loaded Feb 12 19:18:43.771147 systemd[1]: systemd-journald.service: Consumed 3.631s CPU time. Feb 12 19:18:43.771156 kernel: fuse: init (API version 7.34) Feb 12 19:18:43.771164 systemd[1]: Starting systemd-journald.service... Feb 12 19:18:43.771174 systemd[1]: Starting systemd-modules-load.service... Feb 12 19:18:43.771183 systemd[1]: Starting systemd-network-generator.service... Feb 12 19:18:43.771195 systemd[1]: Starting systemd-remount-fs.service... 
Feb 12 19:18:43.771204 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 19:18:43.771213 systemd[1]: verity-setup.service: Deactivated successfully. Feb 12 19:18:43.771223 systemd[1]: Stopped verity-setup.service. Feb 12 19:18:43.771232 systemd[1]: Mounted dev-hugepages.mount. Feb 12 19:18:43.771241 systemd[1]: Mounted dev-mqueue.mount. Feb 12 19:18:43.771250 systemd[1]: Mounted media.mount. Feb 12 19:18:43.771260 systemd[1]: Mounted sys-kernel-debug.mount. Feb 12 19:18:43.771269 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 12 19:18:43.771281 systemd[1]: Mounted tmp.mount. Feb 12 19:18:43.771291 systemd[1]: Finished flatcar-tmpfiles.service. Feb 12 19:18:43.771300 systemd[1]: Finished kmod-static-nodes.service. Feb 12 19:18:43.771313 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 12 19:18:43.771324 systemd[1]: Finished modprobe@configfs.service. Feb 12 19:18:43.771337 systemd-journald[1208]: Journal started Feb 12 19:18:43.771375 systemd-journald[1208]: Runtime Journal (/run/log/journal/d5ca2612457f4ef093fb17e37ae9b43d) is 8.0M, max 78.6M, 70.6M free. 
Feb 12 19:18:33.901000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 12 19:18:34.620000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 19:18:34.620000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 19:18:34.621000 audit: BPF prog-id=10 op=LOAD Feb 12 19:18:34.621000 audit: BPF prog-id=10 op=UNLOAD Feb 12 19:18:34.621000 audit: BPF prog-id=11 op=LOAD Feb 12 19:18:34.621000 audit: BPF prog-id=11 op=UNLOAD Feb 12 19:18:35.927000 audit[1102]: AVC avc: denied { associate } for pid=1102 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 12 19:18:35.927000 audit[1102]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40000224ac a1=4000028420 a2=4000026980 a3=32 items=0 ppid=1085 pid=1102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:18:35.927000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 19:18:35.941000 audit[1102]: AVC avc: denied { associate } for pid=1102 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 12 19:18:35.941000 audit[1102]: SYSCALL 
arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000022585 a2=1ed a3=0 items=2 ppid=1085 pid=1102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:18:35.941000 audit: CWD cwd="/" Feb 12 19:18:35.941000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:18:35.941000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:18:35.941000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 19:18:42.975000 audit: BPF prog-id=12 op=LOAD Feb 12 19:18:42.975000 audit: BPF prog-id=3 op=UNLOAD Feb 12 19:18:42.975000 audit: BPF prog-id=13 op=LOAD Feb 12 19:18:42.975000 audit: BPF prog-id=14 op=LOAD Feb 12 19:18:42.975000 audit: BPF prog-id=4 op=UNLOAD Feb 12 19:18:42.975000 audit: BPF prog-id=5 op=UNLOAD Feb 12 19:18:42.976000 audit: BPF prog-id=15 op=LOAD Feb 12 19:18:42.976000 audit: BPF prog-id=12 op=UNLOAD Feb 12 19:18:42.976000 audit: BPF prog-id=16 op=LOAD Feb 12 19:18:42.976000 audit: BPF prog-id=17 op=LOAD Feb 12 19:18:42.976000 audit: BPF prog-id=13 op=UNLOAD Feb 12 19:18:42.976000 audit: BPF prog-id=14 op=UNLOAD Feb 12 19:18:42.977000 audit: BPF prog-id=18 op=LOAD Feb 12 19:18:42.977000 audit: BPF prog-id=15 op=UNLOAD Feb 12 19:18:42.977000 audit: BPF prog-id=19 op=LOAD Feb 12 19:18:42.977000 audit: BPF prog-id=20 op=LOAD Feb 12 19:18:42.977000 
audit: BPF prog-id=16 op=UNLOAD Feb 12 19:18:42.977000 audit: BPF prog-id=17 op=UNLOAD Feb 12 19:18:42.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:43.006000 audit: BPF prog-id=18 op=UNLOAD Feb 12 19:18:43.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:43.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:43.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:43.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:43.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:43.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:18:43.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:43.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:43.609000 audit: BPF prog-id=21 op=LOAD Feb 12 19:18:43.609000 audit: BPF prog-id=22 op=LOAD Feb 12 19:18:43.609000 audit: BPF prog-id=23 op=LOAD Feb 12 19:18:43.609000 audit: BPF prog-id=19 op=UNLOAD Feb 12 19:18:43.609000 audit: BPF prog-id=20 op=UNLOAD Feb 12 19:18:43.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:43.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:43.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:18:43.767000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 19:18:43.767000 audit[1208]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffedf8c700 a2=4000 a3=1 items=0 ppid=1 pid=1208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:18:43.767000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 12 19:18:42.973857 systemd[1]: Queued start job for default target multi-user.target. Feb 12 19:18:35.895372 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2024-02-12T19:18:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:18:43.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:43.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:42.977895 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 12 19:18:35.895581 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2024-02-12T19:18:35Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 12 19:18:42.979378 systemd[1]: systemd-journald.service: Consumed 3.631s CPU time. 
Feb 12 19:18:35.895598 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2024-02-12T19:18:35Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 12 19:18:35.895634 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2024-02-12T19:18:35Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 12 19:18:35.895644 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2024-02-12T19:18:35Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 12 19:18:43.777355 systemd[1]: Started systemd-journald.service. Feb 12 19:18:35.895671 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2024-02-12T19:18:35Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 12 19:18:35.895683 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2024-02-12T19:18:35Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 12 19:18:35.895873 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2024-02-12T19:18:35Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 12 19:18:35.895904 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2024-02-12T19:18:35Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 12 19:18:35.895916 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2024-02-12T19:18:35Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 12 19:18:35.913303 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2024-02-12T19:18:35Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 12 19:18:35.913336 /usr/lib/systemd/system-generators/torcx-generator[1102]: 
time="2024-02-12T19:18:35Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 12 19:18:35.913356 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2024-02-12T19:18:35Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 12 19:18:35.913370 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2024-02-12T19:18:35Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 12 19:18:35.913388 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2024-02-12T19:18:35Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 12 19:18:35.913402 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2024-02-12T19:18:35Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 12 19:18:42.016927 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2024-02-12T19:18:42Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:18:42.017198 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2024-02-12T19:18:42Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:18:42.017296 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2024-02-12T19:18:42Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network 
/lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:18:42.017447 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2024-02-12T19:18:42Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:18:42.017494 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2024-02-12T19:18:42Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 12 19:18:42.017549 /usr/lib/systemd/system-generators/torcx-generator[1102]: time="2024-02-12T19:18:42Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 12 19:18:43.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:43.782287 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 12 19:18:43.782447 systemd[1]: Finished modprobe@dm_mod.service. Feb 12 19:18:43.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:43.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:18:43.787246 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 12 19:18:43.787367 systemd[1]: Finished modprobe@drm.service. Feb 12 19:18:43.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:43.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:43.791789 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 12 19:18:43.791902 systemd[1]: Finished modprobe@efi_pstore.service. Feb 12 19:18:43.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:43.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:43.796992 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 12 19:18:43.797110 systemd[1]: Finished modprobe@fuse.service. Feb 12 19:18:43.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:43.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:43.803153 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Feb 12 19:18:43.803272 systemd[1]: Finished modprobe@loop.service. Feb 12 19:18:43.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:43.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:43.808148 systemd[1]: Finished systemd-modules-load.service. Feb 12 19:18:43.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:43.813411 systemd[1]: Finished systemd-network-generator.service. Feb 12 19:18:43.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:43.819394 systemd[1]: Finished systemd-remount-fs.service. Feb 12 19:18:43.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:43.824371 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 19:18:43.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:43.829881 systemd[1]: Reached target network-pre.target. Feb 12 19:18:43.835679 systemd[1]: Mounting sys-fs-fuse-connections.mount... 
Feb 12 19:18:43.841042 systemd[1]: Mounting sys-kernel-config.mount... Feb 12 19:18:43.845628 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 12 19:18:43.863411 systemd[1]: Starting systemd-hwdb-update.service... Feb 12 19:18:43.868749 systemd[1]: Starting systemd-journal-flush.service... Feb 12 19:18:43.873395 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 19:18:43.874565 systemd[1]: Starting systemd-random-seed.service... Feb 12 19:18:43.879002 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 19:18:43.880204 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:18:43.885375 systemd[1]: Starting systemd-sysusers.service... Feb 12 19:18:43.890674 systemd[1]: Starting systemd-udev-settle.service... Feb 12 19:18:43.897811 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 12 19:18:43.902978 systemd[1]: Mounted sys-kernel-config.mount. Feb 12 19:18:43.909004 udevadm[1222]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 12 19:18:43.930067 systemd[1]: Finished systemd-random-seed.service. Feb 12 19:18:43.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:43.935737 systemd[1]: Reached target first-boot-complete.target. Feb 12 19:18:43.945688 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:18:43.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:18:43.953739 systemd-journald[1208]: Time spent on flushing to /var/log/journal/d5ca2612457f4ef093fb17e37ae9b43d is 14.503ms for 1154 entries. Feb 12 19:18:43.953739 systemd-journald[1208]: System Journal (/var/log/journal/d5ca2612457f4ef093fb17e37ae9b43d) is 8.0M, max 2.6G, 2.6G free. Feb 12 19:18:44.017717 systemd-journald[1208]: Received client request to flush runtime journal. Feb 12 19:18:44.018618 systemd[1]: Finished systemd-journal-flush.service. Feb 12 19:18:44.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:44.488851 systemd[1]: Finished systemd-sysusers.service. Feb 12 19:18:44.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:44.494768 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 19:18:44.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:44.828496 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 19:18:44.892419 systemd[1]: Finished systemd-hwdb-update.service. Feb 12 19:18:44.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:18:44.897000 audit: BPF prog-id=24 op=LOAD Feb 12 19:18:44.897000 audit: BPF prog-id=25 op=LOAD Feb 12 19:18:44.897000 audit: BPF prog-id=7 op=UNLOAD Feb 12 19:18:44.897000 audit: BPF prog-id=8 op=UNLOAD Feb 12 19:18:44.898412 systemd[1]: Starting systemd-udevd.service... Feb 12 19:18:44.917102 systemd-udevd[1227]: Using default interface naming scheme 'v252'. Feb 12 19:18:45.100189 systemd[1]: Started systemd-udevd.service. Feb 12 19:18:45.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:45.111000 audit: BPF prog-id=26 op=LOAD Feb 12 19:18:45.111881 systemd[1]: Starting systemd-networkd.service... Feb 12 19:18:45.142905 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Feb 12 19:18:45.192207 kernel: mousedev: PS/2 mouse device common for all mice Feb 12 19:18:45.193924 systemd[1]: Starting systemd-userdbd.service... 
Feb 12 19:18:45.192000 audit: BPF prog-id=27 op=LOAD Feb 12 19:18:45.193000 audit: BPF prog-id=28 op=LOAD Feb 12 19:18:45.193000 audit: BPF prog-id=29 op=LOAD Feb 12 19:18:45.233778 kernel: hv_vmbus: registering driver hyperv_fb Feb 12 19:18:45.233859 kernel: hv_utils: Registering HyperV Utility Driver Feb 12 19:18:45.233875 kernel: hv_vmbus: registering driver hv_utils Feb 12 19:18:45.233889 kernel: hv_vmbus: registering driver hv_balloon Feb 12 19:18:45.211000 audit[1231]: AVC avc: denied { confidentiality } for pid=1231 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 12 19:18:45.239138 kernel: hv_utils: Heartbeat IC version 3.0 Feb 12 19:18:45.242128 kernel: hv_utils: Shutdown IC version 3.2 Feb 12 19:18:45.246192 kernel: hv_utils: TimeSync IC version 4.0 Feb 12 19:18:45.238603 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 12 19:18:45.314551 systemd-journald[1208]: Time jumped backwards, rotating. Feb 12 19:18:45.314631 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 12 19:18:45.314646 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 12 19:18:45.314662 kernel: hv_balloon: Memory hot add disabled on ARM64 Feb 12 19:18:45.314673 kernel: Console: switching to colour dummy device 80x25 Feb 12 19:18:45.314686 kernel: Console: switching to colour frame buffer device 128x48 Feb 12 19:18:45.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:18:45.211000 audit[1231]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaad7103950 a1=aa2c a2=ffff809a24b0 a3=aaaad7063010 items=12 ppid=1227 pid=1231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:18:45.211000 audit: CWD cwd="/" Feb 12 19:18:45.211000 audit: PATH item=0 name=(null) inode=7197 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:18:45.211000 audit: PATH item=1 name=(null) inode=11411 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:18:45.211000 audit: PATH item=2 name=(null) inode=11411 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:18:45.211000 audit: PATH item=3 name=(null) inode=11412 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:18:45.211000 audit: PATH item=4 name=(null) inode=11411 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:18:45.211000 audit: PATH item=5 name=(null) inode=11413 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:18:45.211000 audit: PATH item=6 name=(null) inode=11411 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:18:45.211000 audit: PATH item=7 name=(null) inode=11414 
dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:18:45.211000 audit: PATH item=8 name=(null) inode=11411 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:18:45.211000 audit: PATH item=9 name=(null) inode=11415 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:18:45.211000 audit: PATH item=10 name=(null) inode=11411 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:18:45.211000 audit: PATH item=11 name=(null) inode=11416 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:18:45.211000 audit: PROCTITLE proctitle="(udev-worker)" Feb 12 19:18:45.249167 systemd[1]: Started systemd-userdbd.service. Feb 12 19:18:45.499849 systemd-networkd[1248]: lo: Link UP Feb 12 19:18:45.499861 systemd-networkd[1248]: lo: Gained carrier Feb 12 19:18:45.500240 systemd-networkd[1248]: Enumeration completed Feb 12 19:18:45.500362 systemd[1]: Started systemd-networkd.service. Feb 12 19:18:45.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:45.506153 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 19:18:45.527900 systemd-networkd[1248]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 12 19:18:45.575489 kernel: mlx5_core 0ced:00:02.0 enP3309s1: Link up Feb 12 19:18:45.604575 kernel: hv_netvsc 002248b6-78ee-0022-48b6-78ee002248b6 eth0: Data path switched to VF: enP3309s1 Feb 12 19:18:45.605569 systemd-networkd[1248]: enP3309s1: Link UP Feb 12 19:18:45.605994 systemd-networkd[1248]: eth0: Link UP Feb 12 19:18:45.606070 systemd-networkd[1248]: eth0: Gained carrier Feb 12 19:18:45.610917 systemd-networkd[1248]: enP3309s1: Gained carrier Feb 12 19:18:45.617626 systemd-networkd[1248]: eth0: DHCPv4 address 10.200.20.4/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 12 19:18:45.633563 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1245) Feb 12 19:18:45.649292 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 19:18:45.654744 systemd[1]: Finished systemd-udev-settle.service. Feb 12 19:18:45.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:45.660839 systemd[1]: Starting lvm2-activation-early.service... Feb 12 19:18:45.953861 lvm[1307]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:18:45.999267 systemd[1]: Finished lvm2-activation-early.service. Feb 12 19:18:46.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:46.004636 systemd[1]: Reached target cryptsetup.target. Feb 12 19:18:46.010825 systemd[1]: Starting lvm2-activation.service... Feb 12 19:18:46.015249 lvm[1308]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:18:46.041404 systemd[1]: Finished lvm2-activation.service. 
Feb 12 19:18:46.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:46.046209 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:18:46.050761 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 19:18:46.050791 systemd[1]: Reached target local-fs.target. Feb 12 19:18:46.055806 systemd[1]: Reached target machines.target. Feb 12 19:18:46.061462 systemd[1]: Starting ldconfig.service... Feb 12 19:18:46.085569 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 12 19:18:46.085639 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:18:46.086817 systemd[1]: Starting systemd-boot-update.service... Feb 12 19:18:46.092603 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 19:18:46.100591 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 19:18:46.488942 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:18:46.488996 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:18:46.490168 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 12 19:18:46.504518 systemd-tmpfiles[1313]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 19:18:46.505265 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1310 (bootctl) Feb 12 19:18:46.506508 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... 
Feb 12 19:18:46.536296 systemd-tmpfiles[1313]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 19:18:46.537393 systemd-tmpfiles[1313]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 12 19:18:46.546577 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 12 19:18:46.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:46.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:46.892806 systemd-fsck[1319]: fsck.fat 4.2 (2021-01-31) Feb 12 19:18:46.892806 systemd-fsck[1319]: /dev/sda1: 236 files, 113719/258078 clusters Feb 12 19:18:46.880227 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 12 19:18:46.887775 systemd[1]: Mounting boot.mount... Feb 12 19:18:46.897328 systemd[1]: Mounted boot.mount. Feb 12 19:18:46.910679 systemd[1]: Finished systemd-boot-update.service. Feb 12 19:18:46.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:46.937501 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 19:18:46.938096 systemd[1]: Finished systemd-machine-id-commit.service. Feb 12 19:18:46.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:18:47.050601 systemd-networkd[1248]: eth0: Gained IPv6LL Feb 12 19:18:47.055347 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 19:18:47.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:48.636145 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 12 19:18:48.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:48.651479 kernel: kauditd_printk_skb: 87 callbacks suppressed Feb 12 19:18:48.651538 kernel: audit: type=1130 audit(1707765528.642:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:48.652833 systemd[1]: Starting audit-rules.service... Feb 12 19:18:48.675060 systemd[1]: Starting clean-ca-certificates.service... Feb 12 19:18:48.681570 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 19:18:48.686000 audit: BPF prog-id=30 op=LOAD Feb 12 19:18:48.688956 systemd[1]: Starting systemd-resolved.service... Feb 12 19:18:48.697423 kernel: audit: type=1334 audit(1707765528.686:170): prog-id=30 op=LOAD Feb 12 19:18:48.697000 audit: BPF prog-id=31 op=LOAD Feb 12 19:18:48.700177 systemd[1]: Starting systemd-timesyncd.service... Feb 12 19:18:48.708550 kernel: audit: type=1334 audit(1707765528.697:171): prog-id=31 op=LOAD Feb 12 19:18:48.710389 systemd[1]: Starting systemd-update-utmp.service... Feb 12 19:18:48.760604 systemd[1]: Finished clean-ca-certificates.service. 
Feb 12 19:18:48.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:48.765992 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 12 19:18:48.784612 kernel: audit: type=1130 audit(1707765528.764:172): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:48.786000 audit[1331]: SYSTEM_BOOT pid=1331 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 19:18:48.791119 systemd[1]: Finished systemd-update-utmp.service. Feb 12 19:18:48.809197 kernel: audit: type=1127 audit(1707765528.786:173): pid=1331 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 19:18:48.809311 kernel: audit: type=1130 audit(1707765528.807:174): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:48.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:48.810113 systemd[1]: Started systemd-timesyncd.service. Feb 12 19:18:48.834354 systemd[1]: Reached target time-set.target. 
Feb 12 19:18:48.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:48.857490 kernel: audit: type=1130 audit(1707765528.832:175): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:48.866425 systemd-resolved[1329]: Positive Trust Anchors: Feb 12 19:18:48.866730 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 19:18:48.866810 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 19:18:48.888721 systemd-resolved[1329]: Using system hostname 'ci-3510.3.2-a-f75f2c89dc'. Feb 12 19:18:48.890234 systemd[1]: Started systemd-resolved.service. Feb 12 19:18:48.895429 systemd[1]: Reached target network.target. Feb 12 19:18:48.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:48.916868 kernel: audit: type=1130 audit(1707765528.893:176): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:18:48.916561 systemd[1]: Reached target network-online.target. Feb 12 19:18:48.921480 systemd[1]: Reached target nss-lookup.target. Feb 12 19:18:48.926222 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 12 19:18:48.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:48.951494 kernel: audit: type=1130 audit(1707765528.930:177): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:18:49.022189 augenrules[1346]: No rules Feb 12 19:18:49.020000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 19:18:49.024239 systemd[1]: Finished audit-rules.service. Feb 12 19:18:49.034510 kernel: audit: type=1305 audit(1707765529.020:178): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 19:18:49.020000 audit[1346]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc17611e0 a2=420 a3=0 items=0 ppid=1325 pid=1346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:18:49.020000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 19:18:54.396501 ldconfig[1309]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 19:18:54.429800 systemd[1]: Finished ldconfig.service. Feb 12 19:18:54.436956 systemd[1]: Starting systemd-update-done.service... 
Feb 12 19:18:54.507917 systemd[1]: Finished systemd-update-done.service. Feb 12 19:18:54.513448 systemd[1]: Reached target sysinit.target. Feb 12 19:18:54.518264 systemd[1]: Started motdgen.path. Feb 12 19:18:54.522627 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 19:18:54.531004 systemd[1]: Started logrotate.timer. Feb 12 19:18:54.535202 systemd[1]: Started mdadm.timer. Feb 12 19:18:54.539060 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 12 19:18:54.543870 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 19:18:54.543908 systemd[1]: Reached target paths.target. Feb 12 19:18:54.548242 systemd[1]: Reached target timers.target. Feb 12 19:18:54.553506 systemd[1]: Listening on dbus.socket. Feb 12 19:18:54.559494 systemd[1]: Starting docker.socket... Feb 12 19:18:54.565793 systemd[1]: Listening on sshd.socket. Feb 12 19:18:54.570004 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:18:54.570523 systemd[1]: Listening on docker.socket. Feb 12 19:18:54.574855 systemd[1]: Reached target sockets.target. Feb 12 19:18:54.579266 systemd[1]: Reached target basic.target. Feb 12 19:18:54.583563 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:18:54.583589 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:18:54.584755 systemd[1]: Starting containerd.service... Feb 12 19:18:54.589613 systemd[1]: Starting dbus.service... Feb 12 19:18:54.595040 systemd[1]: Starting enable-oem-cloudinit.service... Feb 12 19:18:54.600818 systemd[1]: Starting extend-filesystems.service... 
Feb 12 19:18:54.608249 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 12 19:18:54.609577 systemd[1]: Starting motdgen.service... Feb 12 19:18:54.614628 systemd[1]: Started nvidia.service. Feb 12 19:18:54.620430 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 19:18:54.627662 systemd[1]: Starting prepare-critools.service... Feb 12 19:18:54.633246 systemd[1]: Starting prepare-helm.service... Feb 12 19:18:54.640546 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 12 19:18:54.646751 systemd[1]: Starting sshd-keygen.service... Feb 12 19:18:54.653128 systemd[1]: Starting systemd-logind.service... Feb 12 19:18:54.657686 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:18:54.657756 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 12 19:18:54.658226 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 12 19:18:54.659044 systemd[1]: Starting update-engine.service... Feb 12 19:18:54.665635 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 12 19:18:54.675678 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 19:18:54.675865 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 19:18:54.682296 jq[1375]: true Feb 12 19:18:54.683092 jq[1356]: false Feb 12 19:18:54.702180 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Feb 12 19:18:54.703212 extend-filesystems[1357]: Found sda Feb 12 19:18:54.703212 extend-filesystems[1357]: Found sda1 Feb 12 19:18:54.703212 extend-filesystems[1357]: Found sda2 Feb 12 19:18:54.703212 extend-filesystems[1357]: Found sda3 Feb 12 19:18:54.703212 extend-filesystems[1357]: Found usr Feb 12 19:18:54.703212 extend-filesystems[1357]: Found sda4 Feb 12 19:18:54.703212 extend-filesystems[1357]: Found sda6 Feb 12 19:18:54.703212 extend-filesystems[1357]: Found sda7 Feb 12 19:18:54.703212 extend-filesystems[1357]: Found sda9 Feb 12 19:18:54.703212 extend-filesystems[1357]: Checking size of /dev/sda9 Feb 12 19:18:54.702367 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 12 19:18:54.768401 jq[1387]: true Feb 12 19:18:54.718006 systemd[1]: motdgen.service: Deactivated successfully. Feb 12 19:18:54.718179 systemd[1]: Finished motdgen.service. Feb 12 19:18:54.787252 env[1381]: time="2024-02-12T19:18:54.783888260Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 19:18:54.789319 systemd-logind[1370]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 12 19:18:54.791091 systemd-logind[1370]: New seat seat0. Feb 12 19:18:54.798806 extend-filesystems[1357]: Old size kept for /dev/sda9 Feb 12 19:18:54.798806 extend-filesystems[1357]: Found sr0 Feb 12 19:18:54.792364 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 12 19:18:54.829202 tar[1379]: crictl Feb 12 19:18:54.830584 tar[1380]: linux-arm64/helm Feb 12 19:18:54.831145 tar[1378]: ./ Feb 12 19:18:54.831145 tar[1378]: ./loopback Feb 12 19:18:54.792553 systemd[1]: Finished extend-filesystems.service. Feb 12 19:18:54.884380 env[1381]: time="2024-02-12T19:18:54.884328420Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Feb 12 19:18:54.890175 env[1381]: time="2024-02-12T19:18:54.890134140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:18:54.893874 env[1381]: time="2024-02-12T19:18:54.893823460Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:18:54.894059 env[1381]: time="2024-02-12T19:18:54.894041740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:18:54.894527 env[1381]: time="2024-02-12T19:18:54.894499500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:18:54.894678 env[1381]: time="2024-02-12T19:18:54.894659100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 12 19:18:54.894824 env[1381]: time="2024-02-12T19:18:54.894802860Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 19:18:54.894898 env[1381]: time="2024-02-12T19:18:54.894883620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 12 19:18:54.895103 env[1381]: time="2024-02-12T19:18:54.895084020Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:18:54.895641 env[1381]: time="2024-02-12T19:18:54.895618700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Feb 12 19:18:54.895958 env[1381]: time="2024-02-12T19:18:54.895933100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:18:54.896390 env[1381]: time="2024-02-12T19:18:54.896367620Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 12 19:18:54.896638 env[1381]: time="2024-02-12T19:18:54.896552300Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 19:18:54.896718 env[1381]: time="2024-02-12T19:18:54.896702660Z" level=info msg="metadata content store policy set" policy=shared Feb 12 19:18:54.899995 bash[1414]: Updated "/home/core/.ssh/authorized_keys" Feb 12 19:18:54.900983 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 12 19:18:54.914511 env[1381]: time="2024-02-12T19:18:54.912875140Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 12 19:18:54.914511 env[1381]: time="2024-02-12T19:18:54.912936140Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 12 19:18:54.914511 env[1381]: time="2024-02-12T19:18:54.912966820Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 12 19:18:54.914511 env[1381]: time="2024-02-12T19:18:54.913016180Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 12 19:18:54.914511 env[1381]: time="2024-02-12T19:18:54.913031780Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Feb 12 19:18:54.914511 env[1381]: time="2024-02-12T19:18:54.913046860Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 12 19:18:54.914511 env[1381]: time="2024-02-12T19:18:54.913126300Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 12 19:18:54.914511 env[1381]: time="2024-02-12T19:18:54.913520420Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 12 19:18:54.914511 env[1381]: time="2024-02-12T19:18:54.913541820Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 12 19:18:54.914511 env[1381]: time="2024-02-12T19:18:54.913556420Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 12 19:18:54.914511 env[1381]: time="2024-02-12T19:18:54.913569980Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 12 19:18:54.914511 env[1381]: time="2024-02-12T19:18:54.913583420Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 12 19:18:54.914511 env[1381]: time="2024-02-12T19:18:54.913748300Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 12 19:18:54.914511 env[1381]: time="2024-02-12T19:18:54.913826180Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 12 19:18:54.914869 env[1381]: time="2024-02-12T19:18:54.914101340Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 12 19:18:54.914869 env[1381]: time="2024-02-12T19:18:54.914137700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Feb 12 19:18:54.914869 env[1381]: time="2024-02-12T19:18:54.914154340Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 12 19:18:54.914869 env[1381]: time="2024-02-12T19:18:54.914204020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 12 19:18:54.914869 env[1381]: time="2024-02-12T19:18:54.914217140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 19:18:54.914869 env[1381]: time="2024-02-12T19:18:54.914230260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 12 19:18:54.914869 env[1381]: time="2024-02-12T19:18:54.914241340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 12 19:18:54.914869 env[1381]: time="2024-02-12T19:18:54.914307580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 12 19:18:54.914869 env[1381]: time="2024-02-12T19:18:54.914321140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 12 19:18:54.914869 env[1381]: time="2024-02-12T19:18:54.914332740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 12 19:18:54.914869 env[1381]: time="2024-02-12T19:18:54.914344180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 12 19:18:54.914869 env[1381]: time="2024-02-12T19:18:54.914358180Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 12 19:18:54.915814 env[1381]: time="2024-02-12T19:18:54.915103660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Feb 12 19:18:54.915814 env[1381]: time="2024-02-12T19:18:54.915128740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 12 19:18:54.915814 env[1381]: time="2024-02-12T19:18:54.915141700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 12 19:18:54.915814 env[1381]: time="2024-02-12T19:18:54.915154260Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 12 19:18:54.915814 env[1381]: time="2024-02-12T19:18:54.915169100Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 12 19:18:54.915814 env[1381]: time="2024-02-12T19:18:54.915179740Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 12 19:18:54.915814 env[1381]: time="2024-02-12T19:18:54.915197500Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 12 19:18:54.915814 env[1381]: time="2024-02-12T19:18:54.915232100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 12 19:18:54.916019 env[1381]: time="2024-02-12T19:18:54.915425540Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 12 19:18:54.932127 env[1381]: time="2024-02-12T19:18:54.916326180Z" level=info msg="Connect containerd service" Feb 12 19:18:54.932127 env[1381]: time="2024-02-12T19:18:54.916372380Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 12 19:18:54.932127 env[1381]: time="2024-02-12T19:18:54.917094500Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 19:18:54.932127 env[1381]: time="2024-02-12T19:18:54.917391300Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 12 19:18:54.932127 env[1381]: time="2024-02-12T19:18:54.917430220Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 12 19:18:54.932127 env[1381]: time="2024-02-12T19:18:54.918227780Z" level=info msg="Start subscribing containerd event" Feb 12 19:18:54.932127 env[1381]: time="2024-02-12T19:18:54.918274500Z" level=info msg="Start recovering state" Feb 12 19:18:54.932127 env[1381]: time="2024-02-12T19:18:54.918332540Z" level=info msg="Start event monitor" Feb 12 19:18:54.932127 env[1381]: time="2024-02-12T19:18:54.918351460Z" level=info msg="Start snapshots syncer" Feb 12 19:18:54.932127 env[1381]: time="2024-02-12T19:18:54.918360940Z" level=info msg="Start cni network conf syncer for default" Feb 12 19:18:54.932127 env[1381]: time="2024-02-12T19:18:54.918368420Z" level=info msg="Start streaming server" Feb 12 19:18:54.932127 env[1381]: time="2024-02-12T19:18:54.924982540Z" level=info msg="containerd successfully booted in 0.150316s" Feb 12 19:18:54.917573 systemd[1]: Started containerd.service. 
Feb 12 19:18:54.944969 tar[1378]: ./bandwidth Feb 12 19:18:54.994198 systemd[1]: nvidia.service: Deactivated successfully. Feb 12 19:18:55.033359 dbus-daemon[1355]: [system] SELinux support is enabled Feb 12 19:18:55.033547 systemd[1]: Started dbus.service. Feb 12 19:18:55.040101 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 12 19:18:55.040137 systemd[1]: Reached target system-config.target. Feb 12 19:18:55.049601 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 19:18:55.049627 systemd[1]: Reached target user-config.target. Feb 12 19:18:55.056437 dbus-daemon[1355]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 12 19:18:55.058598 systemd[1]: Started systemd-logind.service. Feb 12 19:18:55.076428 tar[1378]: ./ptp Feb 12 19:18:55.164216 tar[1378]: ./vlan Feb 12 19:18:55.208211 tar[1378]: ./host-device Feb 12 19:18:55.247074 tar[1378]: ./tuning Feb 12 19:18:55.316389 tar[1378]: ./vrf Feb 12 19:18:55.388339 tar[1378]: ./sbr Feb 12 19:18:55.449018 tar[1378]: ./tap Feb 12 19:18:55.526160 tar[1378]: ./dhcp Feb 12 19:18:55.635554 update_engine[1373]: I0212 19:18:55.620045 1373 main.cc:92] Flatcar Update Engine starting Feb 12 19:18:55.663941 systemd[1]: Finished prepare-critools.service. Feb 12 19:18:55.683826 tar[1378]: ./static Feb 12 19:18:55.701531 tar[1380]: linux-arm64/LICENSE Feb 12 19:18:55.701745 tar[1380]: linux-arm64/README.md Feb 12 19:18:55.707634 systemd[1]: Started update-engine.service. Feb 12 19:18:55.707906 update_engine[1373]: I0212 19:18:55.707671 1373 update_check_scheduler.cc:74] Next update check in 9m45s Feb 12 19:18:55.712394 tar[1378]: ./firewall Feb 12 19:18:55.719805 systemd[1]: Started locksmithd.service. Feb 12 19:18:55.727904 systemd[1]: Finished prepare-helm.service. 
Feb 12 19:18:55.755028 tar[1378]: ./macvlan Feb 12 19:18:55.788386 tar[1378]: ./dummy Feb 12 19:18:55.821271 tar[1378]: ./bridge Feb 12 19:18:55.857329 tar[1378]: ./ipvlan Feb 12 19:18:55.890311 tar[1378]: ./portmap Feb 12 19:18:55.921612 tar[1378]: ./host-local Feb 12 19:18:56.011717 systemd[1]: Finished prepare-cni-plugins.service. Feb 12 19:18:57.142779 locksmithd[1461]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 19:18:57.474102 sshd_keygen[1374]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 19:18:57.490734 systemd[1]: Finished sshd-keygen.service. Feb 12 19:18:57.497104 systemd[1]: Starting issuegen.service... Feb 12 19:18:57.501999 systemd[1]: Started waagent.service. Feb 12 19:18:57.506545 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 19:18:57.506718 systemd[1]: Finished issuegen.service. Feb 12 19:18:57.512270 systemd[1]: Starting systemd-user-sessions.service... Feb 12 19:18:57.519548 systemd[1]: Finished systemd-user-sessions.service. Feb 12 19:18:57.526917 systemd[1]: Started getty@tty1.service. Feb 12 19:18:57.536650 systemd[1]: Started serial-getty@ttyAMA0.service. Feb 12 19:18:57.542240 systemd[1]: Reached target getty.target. Feb 12 19:18:57.546738 systemd[1]: Reached target multi-user.target. Feb 12 19:18:57.552653 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 19:18:57.561797 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 12 19:18:57.561972 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 19:18:57.567430 systemd[1]: Startup finished in 729ms (kernel) + 17.775s (initrd) + 24.170s (userspace) = 42.675s. Feb 12 19:18:58.154780 login[1482]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Feb 12 19:18:58.156163 login[1483]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 12 19:18:58.194198 systemd[1]: Created slice user-500.slice. 
Feb 12 19:18:58.195260 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 19:18:58.197525 systemd-logind[1370]: New session 2 of user core. Feb 12 19:18:58.234119 systemd[1]: Finished user-runtime-dir@500.service. Feb 12 19:18:58.235616 systemd[1]: Starting user@500.service... Feb 12 19:18:58.254423 (systemd)[1486]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:18:58.471290 systemd[1486]: Queued start job for default target default.target. Feb 12 19:18:58.471814 systemd[1486]: Reached target paths.target. Feb 12 19:18:58.471834 systemd[1486]: Reached target sockets.target. Feb 12 19:18:58.471845 systemd[1486]: Reached target timers.target. Feb 12 19:18:58.471856 systemd[1486]: Reached target basic.target. Feb 12 19:18:58.471958 systemd[1]: Started user@500.service. Feb 12 19:18:58.472813 systemd[1]: Started session-2.scope. Feb 12 19:18:58.473702 systemd[1486]: Reached target default.target. Feb 12 19:18:58.473925 systemd[1486]: Startup finished in 213ms. Feb 12 19:18:59.155142 login[1482]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 12 19:18:59.158995 systemd-logind[1370]: New session 1 of user core. Feb 12 19:18:59.159390 systemd[1]: Started session-1.scope. Feb 12 19:18:59.376938 systemd-timesyncd[1330]: Timed out waiting for reply from 64.113.44.54:123 (0.flatcar.pool.ntp.org). Feb 12 19:18:59.396364 systemd-timesyncd[1330]: Contacted time server 73.193.62.54:123 (0.flatcar.pool.ntp.org). Feb 12 19:18:59.396422 systemd-timesyncd[1330]: Initial clock synchronization to Mon 2024-02-12 19:18:59.394581 UTC. 
Feb 12 19:19:04.848087 waagent[1480]: 2024-02-12T19:19:04.847975Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Feb 12 19:19:04.855303 waagent[1480]: 2024-02-12T19:19:04.855222Z INFO Daemon Daemon OS: flatcar 3510.3.2 Feb 12 19:19:04.860370 waagent[1480]: 2024-02-12T19:19:04.860298Z INFO Daemon Daemon Python: 3.9.16 Feb 12 19:19:04.865143 waagent[1480]: 2024-02-12T19:19:04.865052Z INFO Daemon Daemon Run daemon Feb 12 19:19:04.869573 waagent[1480]: 2024-02-12T19:19:04.869512Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2' Feb 12 19:19:04.888141 waagent[1480]: 2024-02-12T19:19:04.888014Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 12 19:19:04.903795 waagent[1480]: 2024-02-12T19:19:04.903667Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 12 19:19:04.914461 waagent[1480]: 2024-02-12T19:19:04.914373Z INFO Daemon Daemon cloud-init is enabled: False Feb 12 19:19:04.922253 waagent[1480]: 2024-02-12T19:19:04.922165Z INFO Daemon Daemon Using waagent for provisioning Feb 12 19:19:04.928353 waagent[1480]: 2024-02-12T19:19:04.928283Z INFO Daemon Daemon Activate resource disk Feb 12 19:19:04.933202 waagent[1480]: 2024-02-12T19:19:04.933132Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 12 19:19:04.947751 waagent[1480]: 2024-02-12T19:19:04.947668Z INFO Daemon Daemon Found device: None Feb 12 19:19:04.952885 waagent[1480]: 2024-02-12T19:19:04.952811Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 12 19:19:04.962172 waagent[1480]: 2024-02-12T19:19:04.962095Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, 
duration=0 Feb 12 19:19:04.976634 waagent[1480]: 2024-02-12T19:19:04.976564Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 12 19:19:04.982984 waagent[1480]: 2024-02-12T19:19:04.982911Z INFO Daemon Daemon Running default provisioning handler Feb 12 19:19:04.995938 waagent[1480]: 2024-02-12T19:19:04.995807Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 12 19:19:05.011080 waagent[1480]: 2024-02-12T19:19:05.010945Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 12 19:19:05.021098 waagent[1480]: 2024-02-12T19:19:05.021016Z INFO Daemon Daemon cloud-init is enabled: False Feb 12 19:19:05.026554 waagent[1480]: 2024-02-12T19:19:05.026430Z INFO Daemon Daemon Copying ovf-env.xml Feb 12 19:19:05.106302 waagent[1480]: 2024-02-12T19:19:05.106112Z INFO Daemon Daemon Successfully mounted dvd Feb 12 19:19:05.238194 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 12 19:19:05.272150 waagent[1480]: 2024-02-12T19:19:05.272000Z INFO Daemon Daemon Detect protocol endpoint Feb 12 19:19:05.277176 waagent[1480]: 2024-02-12T19:19:05.277097Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 12 19:19:05.283095 waagent[1480]: 2024-02-12T19:19:05.283022Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Feb 12 19:19:05.289905 waagent[1480]: 2024-02-12T19:19:05.289838Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 12 19:19:05.295373 waagent[1480]: 2024-02-12T19:19:05.295311Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 12 19:19:05.300543 waagent[1480]: 2024-02-12T19:19:05.300482Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 12 19:19:05.455710 waagent[1480]: 2024-02-12T19:19:05.455594Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 12 19:19:05.463290 waagent[1480]: 2024-02-12T19:19:05.463242Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 12 19:19:05.468763 waagent[1480]: 2024-02-12T19:19:05.468699Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 12 19:19:06.029388 waagent[1480]: 2024-02-12T19:19:06.029248Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 12 19:19:06.045979 waagent[1480]: 2024-02-12T19:19:06.045908Z INFO Daemon Daemon Forcing an update of the goal state.. Feb 12 19:19:06.051942 waagent[1480]: 2024-02-12T19:19:06.051874Z INFO Daemon Daemon Fetching goal state [incarnation 1] Feb 12 19:19:06.129019 waagent[1480]: 2024-02-12T19:19:06.128893Z INFO Daemon Daemon Found private key matching thumbprint 013833255CEAD4F6F95DC2953C79D76C16C3F3EF Feb 12 19:19:06.137440 waagent[1480]: 2024-02-12T19:19:06.137360Z INFO Daemon Daemon Certificate with thumbprint A036B5042C0989FB3595FBC5D4618DA93A8AFB39 has no matching private key. 
Feb 12 19:19:06.148782 waagent[1480]: 2024-02-12T19:19:06.148696Z INFO Daemon Daemon Fetch goal state completed Feb 12 19:19:06.201225 waagent[1480]: 2024-02-12T19:19:06.201168Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 8e590933-5054-4da3-a72e-90de3f6d2939 New eTag: 14435004556393045477] Feb 12 19:19:06.213064 waagent[1480]: 2024-02-12T19:19:06.212981Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Feb 12 19:19:06.230604 waagent[1480]: 2024-02-12T19:19:06.230526Z INFO Daemon Daemon Starting provisioning Feb 12 19:19:06.236135 waagent[1480]: 2024-02-12T19:19:06.236065Z INFO Daemon Daemon Handle ovf-env.xml. Feb 12 19:19:06.241336 waagent[1480]: 2024-02-12T19:19:06.241274Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-f75f2c89dc] Feb 12 19:19:06.279443 waagent[1480]: 2024-02-12T19:19:06.279281Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-f75f2c89dc] Feb 12 19:19:06.286463 waagent[1480]: 2024-02-12T19:19:06.286377Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 12 19:19:06.294298 waagent[1480]: 2024-02-12T19:19:06.294225Z INFO Daemon Daemon Primary interface is [eth0] Feb 12 19:19:06.310543 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Feb 12 19:19:06.310702 systemd[1]: Stopped systemd-networkd-wait-online.service. Feb 12 19:19:06.310761 systemd[1]: Stopping systemd-networkd-wait-online.service... Feb 12 19:19:06.311029 systemd[1]: Stopping systemd-networkd.service... Feb 12 19:19:06.315520 systemd-networkd[1248]: eth0: DHCPv6 lease lost Feb 12 19:19:06.317006 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 19:19:06.317160 systemd[1]: Stopped systemd-networkd.service. Feb 12 19:19:06.319167 systemd[1]: Starting systemd-networkd.service... 
Feb 12 19:19:06.345419 systemd-networkd[1532]: enP3309s1: Link UP Feb 12 19:19:06.345430 systemd-networkd[1532]: enP3309s1: Gained carrier Feb 12 19:19:06.346387 systemd-networkd[1532]: eth0: Link UP Feb 12 19:19:06.346398 systemd-networkd[1532]: eth0: Gained carrier Feb 12 19:19:06.346936 systemd-networkd[1532]: lo: Link UP Feb 12 19:19:06.346947 systemd-networkd[1532]: lo: Gained carrier Feb 12 19:19:06.347175 systemd-networkd[1532]: eth0: Gained IPv6LL Feb 12 19:19:06.348063 systemd-networkd[1532]: Enumeration completed Feb 12 19:19:06.348166 systemd[1]: Started systemd-networkd.service. Feb 12 19:19:06.349521 systemd-networkd[1532]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:19:06.349856 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 19:19:06.353422 waagent[1480]: 2024-02-12T19:19:06.353277Z INFO Daemon Daemon Create user account if not exists Feb 12 19:19:06.359670 waagent[1480]: 2024-02-12T19:19:06.359591Z INFO Daemon Daemon User core already exists, skip useradd Feb 12 19:19:06.365630 waagent[1480]: 2024-02-12T19:19:06.365561Z INFO Daemon Daemon Configure sudoer Feb 12 19:19:06.370585 systemd-networkd[1532]: eth0: DHCPv4 address 10.200.20.4/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 12 19:19:06.370858 waagent[1480]: 2024-02-12T19:19:06.370796Z INFO Daemon Daemon Configure sshd Feb 12 19:19:06.375281 waagent[1480]: 2024-02-12T19:19:06.375220Z INFO Daemon Daemon Deploy ssh public key. Feb 12 19:19:06.381741 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 19:19:07.609164 waagent[1480]: 2024-02-12T19:19:07.609097Z INFO Daemon Daemon Provisioning complete Feb 12 19:19:07.632501 waagent[1480]: 2024-02-12T19:19:07.632409Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 12 19:19:07.639190 waagent[1480]: 2024-02-12T19:19:07.639114Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Feb 12 19:19:07.650414 waagent[1480]: 2024-02-12T19:19:07.650336Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Feb 12 19:19:07.947271 waagent[1541]: 2024-02-12T19:19:07.947127Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Feb 12 19:19:07.948337 waagent[1541]: 2024-02-12T19:19:07.948283Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:19:07.948594 waagent[1541]: 2024-02-12T19:19:07.948546Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:19:07.960878 waagent[1541]: 2024-02-12T19:19:07.960805Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Feb 12 19:19:07.961187 waagent[1541]: 2024-02-12T19:19:07.961139Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Feb 12 19:19:08.028290 waagent[1541]: 2024-02-12T19:19:08.028157Z INFO ExtHandler ExtHandler Found private key matching thumbprint 013833255CEAD4F6F95DC2953C79D76C16C3F3EF Feb 12 19:19:08.028676 waagent[1541]: 2024-02-12T19:19:08.028624Z INFO ExtHandler ExtHandler Certificate with thumbprint A036B5042C0989FB3595FBC5D4618DA93A8AFB39 has no matching private key. 
Feb 12 19:19:08.028990 waagent[1541]: 2024-02-12T19:19:08.028943Z INFO ExtHandler ExtHandler Fetch goal state completed Feb 12 19:19:08.044640 waagent[1541]: 2024-02-12T19:19:08.044587Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: a9c3a150-b2c3-4fe1-881c-b311468ea023 New eTag: 14435004556393045477] Feb 12 19:19:08.045399 waagent[1541]: 2024-02-12T19:19:08.045343Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Feb 12 19:19:08.091267 waagent[1541]: 2024-02-12T19:19:08.091134Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 12 19:19:08.116518 waagent[1541]: 2024-02-12T19:19:08.116419Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1541 Feb 12 19:19:08.120372 waagent[1541]: 2024-02-12T19:19:08.120309Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 12 19:19:08.121845 waagent[1541]: 2024-02-12T19:19:08.121789Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 12 19:19:08.260118 waagent[1541]: 2024-02-12T19:19:08.260011Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 12 19:19:08.260683 waagent[1541]: 2024-02-12T19:19:08.260627Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 12 19:19:08.268655 waagent[1541]: 2024-02-12T19:19:08.268603Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Feb 12 19:19:08.269284 waagent[1541]: 2024-02-12T19:19:08.269232Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 12 19:19:08.270560 waagent[1541]: 2024-02-12T19:19:08.270498Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Feb 12 19:19:08.271989 waagent[1541]: 2024-02-12T19:19:08.271923Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 12 19:19:08.272344 waagent[1541]: 2024-02-12T19:19:08.272273Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:19:08.272818 waagent[1541]: 2024-02-12T19:19:08.272750Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:19:08.273383 waagent[1541]: 2024-02-12T19:19:08.273323Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 12 19:19:08.273989 waagent[1541]: 2024-02-12T19:19:08.273924Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Feb 12 19:19:08.274278 waagent[1541]: 2024-02-12T19:19:08.274212Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 12 19:19:08.274278 waagent[1541]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 12 19:19:08.274278 waagent[1541]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 12 19:19:08.274278 waagent[1541]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 12 19:19:08.274278 waagent[1541]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:19:08.274278 waagent[1541]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:19:08.274278 waagent[1541]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:19:08.276556 waagent[1541]: 2024-02-12T19:19:08.276391Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 12 19:19:08.277165 waagent[1541]: 2024-02-12T19:19:08.277098Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:19:08.277441 waagent[1541]: 2024-02-12T19:19:08.277390Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:19:08.277617 waagent[1541]: 2024-02-12T19:19:08.277547Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 12 19:19:08.278764 waagent[1541]: 2024-02-12T19:19:08.278683Z INFO EnvHandler ExtHandler Configure routes Feb 12 19:19:08.279002 waagent[1541]: 2024-02-12T19:19:08.278956Z INFO EnvHandler ExtHandler Gateway:None Feb 12 19:19:08.279198 waagent[1541]: 2024-02-12T19:19:08.279155Z INFO EnvHandler ExtHandler Routes:None Feb 12 19:19:08.280221 waagent[1541]: 2024-02-12T19:19:08.280161Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 12 19:19:08.280311 waagent[1541]: 2024-02-12T19:19:08.280244Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Feb 12 19:19:08.280902 waagent[1541]: 2024-02-12T19:19:08.280830Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 12 19:19:08.290691 waagent[1541]: 2024-02-12T19:19:08.290617Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Feb 12 19:19:08.293028 waagent[1541]: 2024-02-12T19:19:08.292968Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 12 19:19:08.295000 waagent[1541]: 2024-02-12T19:19:08.294936Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Feb 12 19:19:08.317725 waagent[1541]: 2024-02-12T19:19:08.317652Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1532' Feb 12 19:19:08.320787 waagent[1541]: 2024-02-12T19:19:08.320730Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. Feb 12 19:19:08.439831 waagent[1541]: 2024-02-12T19:19:08.439769Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting Feb 12 19:19:08.653886 waagent[1480]: 2024-02-12T19:19:08.653674Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Feb 12 19:19:08.657442 waagent[1480]: 2024-02-12T19:19:08.657387Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent Feb 12 19:19:09.839517 waagent[1568]: 2024-02-12T19:19:09.839399Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 12 19:19:09.840570 waagent[1568]: 2024-02-12T19:19:09.840513Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2 Feb 12 19:19:09.840825 waagent[1568]: 2024-02-12T19:19:09.840778Z INFO ExtHandler ExtHandler Python: 3.9.16 Feb 12 19:19:09.849276 waagent[1568]: 2024-02-12T19:19:09.849141Z INFO ExtHandler ExtHandler 
Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 12 19:19:09.849909 waagent[1568]: 2024-02-12T19:19:09.849851Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:19:09.850162 waagent[1568]: 2024-02-12T19:19:09.850112Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:19:09.862977 waagent[1568]: 2024-02-12T19:19:09.862902Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 12 19:19:09.871913 waagent[1568]: 2024-02-12T19:19:09.871852Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Feb 12 19:19:09.873106 waagent[1568]: 2024-02-12T19:19:09.873050Z INFO ExtHandler Feb 12 19:19:09.873333 waagent[1568]: 2024-02-12T19:19:09.873285Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 4981e900-b658-4797-bcb6-6eab7ca086a7 eTag: 14435004556393045477 source: Fabric] Feb 12 19:19:09.874194 waagent[1568]: 2024-02-12T19:19:09.874139Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Feb 12 19:19:09.875512 waagent[1568]: 2024-02-12T19:19:09.875437Z INFO ExtHandler Feb 12 19:19:09.875733 waagent[1568]: 2024-02-12T19:19:09.875687Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 12 19:19:09.881917 waagent[1568]: 2024-02-12T19:19:09.881862Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 12 19:19:09.882573 waagent[1568]: 2024-02-12T19:19:09.882525Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 12 19:19:09.902643 waagent[1568]: 2024-02-12T19:19:09.902580Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. 
Feb 12 19:19:09.985942 waagent[1568]: 2024-02-12T19:19:09.985806Z INFO ExtHandler Downloaded certificate {'thumbprint': 'A036B5042C0989FB3595FBC5D4618DA93A8AFB39', 'hasPrivateKey': False}
Feb 12 19:19:09.987185 waagent[1568]: 2024-02-12T19:19:09.987127Z INFO ExtHandler Downloaded certificate {'thumbprint': '013833255CEAD4F6F95DC2953C79D76C16C3F3EF', 'hasPrivateKey': True}
Feb 12 19:19:09.988359 waagent[1568]: 2024-02-12T19:19:09.988302Z INFO ExtHandler Fetch goal state completed
Feb 12 19:19:10.017216 waagent[1568]: 2024-02-12T19:19:10.017139Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1568
Feb 12 19:19:10.020903 waagent[1568]: 2024-02-12T19:19:10.020835Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk']
Feb 12 19:19:10.022512 waagent[1568]: 2024-02-12T19:19:10.022442Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Feb 12 19:19:10.027807 waagent[1568]: 2024-02-12T19:19:10.027751Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Feb 12 19:19:10.028326 waagent[1568]: 2024-02-12T19:19:10.028270Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Feb 12 19:19:10.036063 waagent[1568]: 2024-02-12T19:19:10.036008Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Feb 12 19:19:10.036732 waagent[1568]: 2024-02-12T19:19:10.036676Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Feb 12 19:19:10.043169 waagent[1568]: 2024-02-12T19:19:10.043056Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Feb 12 19:19:10.047037 waagent[1568]: 2024-02-12T19:19:10.046973Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Feb 12 19:19:10.048807 waagent[1568]: 2024-02-12T19:19:10.048735Z INFO ExtHandler ExtHandler Starting env monitor service.
Feb 12 19:19:10.049049 waagent[1568]: 2024-02-12T19:19:10.048981Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 12 19:19:10.049648 waagent[1568]: 2024-02-12T19:19:10.049579Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 12 19:19:10.050286 waagent[1568]: 2024-02-12T19:19:10.050217Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Feb 12 19:19:10.050712 waagent[1568]: 2024-02-12T19:19:10.050642Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Feb 12 19:19:10.050712 waagent[1568]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Feb 12 19:19:10.050712 waagent[1568]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Feb 12 19:19:10.050712 waagent[1568]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Feb 12 19:19:10.050712 waagent[1568]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Feb 12 19:19:10.050712 waagent[1568]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 12 19:19:10.050712 waagent[1568]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 12 19:19:10.053295 waagent[1568]: 2024-02-12T19:19:10.053182Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Feb 12 19:19:10.053701 waagent[1568]: 2024-02-12T19:19:10.053629Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 12 19:19:10.053904 waagent[1568]: 2024-02-12T19:19:10.053847Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 12 19:19:10.056647 waagent[1568]: 2024-02-12T19:19:10.056477Z INFO EnvHandler ExtHandler Configure routes
Feb 12 19:19:10.056881 waagent[1568]: 2024-02-12T19:19:10.056820Z INFO EnvHandler ExtHandler Gateway:None
Feb 12 19:19:10.056994 waagent[1568]: 2024-02-12T19:19:10.056948Z INFO EnvHandler ExtHandler Routes:None
Feb 12 19:19:10.057923 waagent[1568]: 2024-02-12T19:19:10.057852Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Feb 12 19:19:10.058145 waagent[1568]: 2024-02-12T19:19:10.058078Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Feb 12 19:19:10.062671 waagent[1568]: 2024-02-12T19:19:10.061006Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Feb 12 19:19:10.063290 waagent[1568]: 2024-02-12T19:19:10.063107Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Feb 12 19:19:10.067110 waagent[1568]: 2024-02-12T19:19:10.067031Z INFO MonitorHandler ExtHandler Network interfaces:
Feb 12 19:19:10.067110 waagent[1568]: Executing ['ip', '-a', '-o', 'link']:
Feb 12 19:19:10.067110 waagent[1568]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Feb 12 19:19:10.067110 waagent[1568]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b6:78:ee brd ff:ff:ff:ff:ff:ff
Feb 12 19:19:10.067110 waagent[1568]: 3: enP3309s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b6:78:ee brd ff:ff:ff:ff:ff:ff\ altname enP3309p0s2
Feb 12 19:19:10.067110 waagent[1568]: Executing ['ip', '-4', '-a', '-o', 'address']:
Feb 12 19:19:10.067110 waagent[1568]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Feb 12 19:19:10.067110 waagent[1568]: 2: eth0 inet 10.200.20.4/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Feb 12 19:19:10.067110 waagent[1568]: Executing ['ip', '-6', '-a', '-o', 'address']:
Feb 12 19:19:10.067110 waagent[1568]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Feb 12 19:19:10.067110 waagent[1568]: 2: eth0 inet6 fe80::222:48ff:feb6:78ee/64 scope link \ valid_lft forever preferred_lft forever
Feb 12 19:19:10.068636 waagent[1568]: 2024-02-12T19:19:10.068572Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Feb 12 19:19:10.086508 waagent[1568]: 2024-02-12T19:19:10.086403Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod)
Feb 12 19:19:10.089292 waagent[1568]: 2024-02-12T19:19:10.089228Z INFO ExtHandler ExtHandler Downloading manifest
Feb 12 19:19:10.162760 waagent[1568]: 2024-02-12T19:19:10.162653Z INFO ExtHandler ExtHandler
Feb 12 19:19:10.166212 waagent[1568]: 2024-02-12T19:19:10.166082Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 047e9c2c-0141-406d-8335-19ff4291994e correlation 72b79c42-934b-4473-8ebd-4be138f85c16 created: 2024-02-12T19:17:24.883408Z]
Feb 12 19:19:10.173951 waagent[1568]: 2024-02-12T19:19:10.173807Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Feb 12 19:19:10.176188 waagent[1568]: 2024-02-12T19:19:10.176126Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 13 ms]
Feb 12 19:19:10.196997 waagent[1568]: 2024-02-12T19:19:10.196926Z INFO ExtHandler ExtHandler Looking for existing remote access users.
Feb 12 19:19:10.212061 waagent[1568]: 2024-02-12T19:19:10.211979Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 1ED5F8ED-2AC7-45B8-BF6A-227BDB9AB708;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1]
Feb 12 19:19:10.299007 waagent[1568]: 2024-02-12T19:19:10.298868Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules.
Current Firewall rules:
Feb 12 19:19:10.299007 waagent[1568]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 12 19:19:10.299007 waagent[1568]: pkts bytes target prot opt in out source destination
Feb 12 19:19:10.299007 waagent[1568]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Feb 12 19:19:10.299007 waagent[1568]: pkts bytes target prot opt in out source destination
Feb 12 19:19:10.299007 waagent[1568]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 12 19:19:10.299007 waagent[1568]: pkts bytes target prot opt in out source destination
Feb 12 19:19:10.299007 waagent[1568]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Feb 12 19:19:10.299007 waagent[1568]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Feb 12 19:19:10.299007 waagent[1568]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Feb 12 19:19:10.306352 waagent[1568]: 2024-02-12T19:19:10.306230Z INFO EnvHandler ExtHandler Current Firewall rules:
Feb 12 19:19:10.306352 waagent[1568]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 12 19:19:10.306352 waagent[1568]: pkts bytes target prot opt in out source destination
Feb 12 19:19:10.306352 waagent[1568]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Feb 12 19:19:10.306352 waagent[1568]: pkts bytes target prot opt in out source destination
Feb 12 19:19:10.306352 waagent[1568]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 12 19:19:10.306352 waagent[1568]: pkts bytes target prot opt in out source destination
Feb 12 19:19:10.306352 waagent[1568]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Feb 12 19:19:10.306352 waagent[1568]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Feb 12 19:19:10.306352 waagent[1568]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Feb 12 19:19:10.306935 waagent[1568]: 2024-02-12T19:19:10.306880Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Feb 12 19:19:33.369853 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Feb 12 19:19:40.919232 update_engine[1373]: I0212 19:19:40.918884 1373 update_attempter.cc:509] Updating boot flags...
Feb 12 19:20:01.844266 systemd[1]: Created slice system-sshd.slice.
Feb 12 19:20:01.845359 systemd[1]: Started sshd@0-10.200.20.4:22-10.200.12.6:32904.service.
Feb 12 19:20:02.496552 sshd[1687]: Accepted publickey for core from 10.200.12.6 port 32904 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U
Feb 12 19:20:02.521333 sshd[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:20:02.525188 systemd-logind[1370]: New session 3 of user core.
Feb 12 19:20:02.525949 systemd[1]: Started session-3.scope.
Feb 12 19:20:02.887801 systemd[1]: Started sshd@1-10.200.20.4:22-10.200.12.6:32912.service.
Feb 12 19:20:03.309062 sshd[1692]: Accepted publickey for core from 10.200.12.6 port 32912 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U
Feb 12 19:20:03.310606 sshd[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:20:03.314152 systemd-logind[1370]: New session 4 of user core.
Feb 12 19:20:03.314594 systemd[1]: Started session-4.scope.
Feb 12 19:20:03.615729 sshd[1692]: pam_unix(sshd:session): session closed for user core
Feb 12 19:20:03.618078 systemd[1]: sshd@1-10.200.20.4:22-10.200.12.6:32912.service: Deactivated successfully.
Feb 12 19:20:03.618769 systemd[1]: session-4.scope: Deactivated successfully.
Feb 12 19:20:03.619291 systemd-logind[1370]: Session 4 logged out. Waiting for processes to exit.
Feb 12 19:20:03.620120 systemd-logind[1370]: Removed session 4.
Feb 12 19:20:03.687815 systemd[1]: Started sshd@2-10.200.20.4:22-10.200.12.6:32924.service.
Feb 12 19:20:04.101827 sshd[1698]: Accepted publickey for core from 10.200.12.6 port 32924 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U
Feb 12 19:20:04.103340 sshd[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:20:04.107249 systemd[1]: Started session-5.scope.
Feb 12 19:20:04.108298 systemd-logind[1370]: New session 5 of user core.
Feb 12 19:20:04.401305 sshd[1698]: pam_unix(sshd:session): session closed for user core
Feb 12 19:20:04.403561 systemd[1]: sshd@2-10.200.20.4:22-10.200.12.6:32924.service: Deactivated successfully.
Feb 12 19:20:04.404211 systemd[1]: session-5.scope: Deactivated successfully.
Feb 12 19:20:04.404730 systemd-logind[1370]: Session 5 logged out. Waiting for processes to exit.
Feb 12 19:20:04.405503 systemd-logind[1370]: Removed session 5.
Feb 12 19:20:04.470055 systemd[1]: Started sshd@3-10.200.20.4:22-10.200.12.6:32936.service.
Feb 12 19:20:04.882123 sshd[1704]: Accepted publickey for core from 10.200.12.6 port 32936 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U
Feb 12 19:20:04.883328 sshd[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:20:04.887389 systemd[1]: Started session-6.scope.
Feb 12 19:20:04.887700 systemd-logind[1370]: New session 6 of user core.
Feb 12 19:20:05.183936 sshd[1704]: pam_unix(sshd:session): session closed for user core
Feb 12 19:20:05.186594 systemd[1]: sshd@3-10.200.20.4:22-10.200.12.6:32936.service: Deactivated successfully.
Feb 12 19:20:05.187245 systemd[1]: session-6.scope: Deactivated successfully.
Feb 12 19:20:05.187770 systemd-logind[1370]: Session 6 logged out. Waiting for processes to exit.
Feb 12 19:20:05.188577 systemd-logind[1370]: Removed session 6.
Feb 12 19:20:05.253093 systemd[1]: Started sshd@4-10.200.20.4:22-10.200.12.6:32948.service.
Feb 12 19:20:05.667349 sshd[1710]: Accepted publickey for core from 10.200.12.6 port 32948 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U
Feb 12 19:20:05.668566 sshd[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:20:05.672186 systemd-logind[1370]: New session 7 of user core.
Feb 12 19:20:05.672585 systemd[1]: Started session-7.scope.
Feb 12 19:20:06.347998 sudo[1713]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 12 19:20:06.348203 sudo[1713]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 12 19:20:07.087678 systemd[1]: Starting docker.service...
Feb 12 19:20:07.131561 env[1728]: time="2024-02-12T19:20:07.131507427Z" level=info msg="Starting up"
Feb 12 19:20:07.132747 env[1728]: time="2024-02-12T19:20:07.132721552Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 12 19:20:07.132747 env[1728]: time="2024-02-12T19:20:07.132742322Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 12 19:20:07.132856 env[1728]: time="2024-02-12T19:20:07.132760250Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 12 19:20:07.132856 env[1728]: time="2024-02-12T19:20:07.132769975Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 12 19:20:07.134414 env[1728]: time="2024-02-12T19:20:07.134387128Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 12 19:20:07.134414 env[1728]: time="2024-02-12T19:20:07.134409778Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 12 19:20:07.134526 env[1728]: time="2024-02-12T19:20:07.134426026Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 12 19:20:07.134526 env[1728]: time="2024-02-12T19:20:07.134434950Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 12 19:20:07.250845 env[1728]: time="2024-02-12T19:20:07.250801633Z" level=info msg="Loading containers: start."
Feb 12 19:20:07.406496 kernel: Initializing XFRM netlink socket
Feb 12 19:20:07.429696 env[1728]: time="2024-02-12T19:20:07.429662224Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb 12 19:20:07.605501 systemd-networkd[1532]: docker0: Link UP
Feb 12 19:20:07.624449 env[1728]: time="2024-02-12T19:20:07.624413977Z" level=info msg="Loading containers: done."
Feb 12 19:20:07.632816 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4264027500-merged.mount: Deactivated successfully.
Feb 12 19:20:07.645774 env[1728]: time="2024-02-12T19:20:07.645730386Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 12 19:20:07.646049 env[1728]: time="2024-02-12T19:20:07.645973099Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Feb 12 19:20:07.646136 env[1728]: time="2024-02-12T19:20:07.646113805Z" level=info msg="Daemon has completed initialization"
Feb 12 19:20:07.677138 systemd[1]: Started docker.service.
Feb 12 19:20:07.685389 env[1728]: time="2024-02-12T19:20:07.685268963Z" level=info msg="API listen on /run/docker.sock"
Feb 12 19:20:07.700521 systemd[1]: Reloading.
Feb 12 19:20:07.756322 /usr/lib/systemd/system-generators/torcx-generator[1861]: time="2024-02-12T19:20:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 19:20:07.756356 /usr/lib/systemd/system-generators/torcx-generator[1861]: time="2024-02-12T19:20:07Z" level=info msg="torcx already run"
Feb 12 19:20:07.831666 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 19:20:07.831858 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 19:20:07.849081 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 19:20:07.927650 systemd[1]: Started kubelet.service.
Feb 12 19:20:07.976922 kubelet[1918]: E0212 19:20:07.976802 1918 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Feb 12 19:20:07.979283 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 12 19:20:07.979402 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 12 19:20:12.933063 env[1381]: time="2024-02-12T19:20:12.932970165Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\""
Feb 12 19:20:13.797395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2783729214.mount: Deactivated successfully.
Feb 12 19:20:17.376475 env[1381]: time="2024-02-12T19:20:17.376409570Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:17.384271 env[1381]: time="2024-02-12T19:20:17.384224260Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d19178cf7413f0942a116deaaea447983d297afb5dc7f62456c43839e7aaecfa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:17.388718 env[1381]: time="2024-02-12T19:20:17.388676398Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:17.393515 env[1381]: time="2024-02-12T19:20:17.393482062Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:cfcebda74d6e665b68931d3589ee69fde81cd503ff3169888e4502af65579d98,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:17.394129 env[1381]: time="2024-02-12T19:20:17.394098040Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\" returns image reference \"sha256:d19178cf7413f0942a116deaaea447983d297afb5dc7f62456c43839e7aaecfa\""
Feb 12 19:20:17.402820 env[1381]: time="2024-02-12T19:20:17.402712533Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\""
Feb 12 19:20:18.053582 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 12 19:20:18.053765 systemd[1]: Stopped kubelet.service.
Feb 12 19:20:18.055184 systemd[1]: Started kubelet.service.
Feb 12 19:20:18.108208 kubelet[1941]: E0212 19:20:18.108167 1941 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Feb 12 19:20:18.111209 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 12 19:20:18.111345 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 12 19:20:20.362991 env[1381]: time="2024-02-12T19:20:20.362945555Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:20.375541 env[1381]: time="2024-02-12T19:20:20.375504906Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6b9759f115be4c68b4a500b8c1d7bbeaf16e8e887b01eaf79c135b7b267baf95,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:20.381157 env[1381]: time="2024-02-12T19:20:20.381111702Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:20.385894 env[1381]: time="2024-02-12T19:20:20.385849773Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fa168ebca1f6dbfe86ef0a690e007531c1f53569274fc7dc2774fe228b6ce8c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:20.386677 env[1381]: time="2024-02-12T19:20:20.386646154Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\" returns image reference \"sha256:6b9759f115be4c68b4a500b8c1d7bbeaf16e8e887b01eaf79c135b7b267baf95\""
Feb 12 19:20:20.395998 env[1381]: time="2024-02-12T19:20:20.395961843Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\""
Feb 12 19:20:21.996936 env[1381]: time="2024-02-12T19:20:21.996890281Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:22.007307 env[1381]: time="2024-02-12T19:20:22.007270304Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:745369ed75bfc0dd1319e4c64383b4ef2cb163cec6630fa288ad3fb6bf6624eb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:22.015096 env[1381]: time="2024-02-12T19:20:22.015050041Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:22.020416 env[1381]: time="2024-02-12T19:20:22.020371134Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:09294de61e63987f181077cbc2f5c82463878af9cd8ecc6110c54150c9ae3143,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:22.021000 env[1381]: time="2024-02-12T19:20:22.020972681Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\" returns image reference \"sha256:745369ed75bfc0dd1319e4c64383b4ef2cb163cec6630fa288ad3fb6bf6624eb\""
Feb 12 19:20:22.029766 env[1381]: time="2024-02-12T19:20:22.029734603Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\""
Feb 12 19:20:23.104121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount230500280.mount: Deactivated successfully.
Feb 12 19:20:24.029420 env[1381]: time="2024-02-12T19:20:24.029366691Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:24.035001 env[1381]: time="2024-02-12T19:20:24.034960621Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f17f9528c5073692925255c3de3f310109480873912e8b5ddc171ae1e64324ef,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:24.040095 env[1381]: time="2024-02-12T19:20:24.040055244Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:24.043078 env[1381]: time="2024-02-12T19:20:24.043042566Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:24.043371 env[1381]: time="2024-02-12T19:20:24.043339693Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference \"sha256:f17f9528c5073692925255c3de3f310109480873912e8b5ddc171ae1e64324ef\""
Feb 12 19:20:24.052151 env[1381]: time="2024-02-12T19:20:24.052114162Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 12 19:20:24.695170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3643129482.mount: Deactivated successfully.
Feb 12 19:20:24.725509 env[1381]: time="2024-02-12T19:20:24.725454588Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:24.733522 env[1381]: time="2024-02-12T19:20:24.733482917Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:24.739209 env[1381]: time="2024-02-12T19:20:24.739172876Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:24.744439 env[1381]: time="2024-02-12T19:20:24.744396977Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:24.745263 env[1381]: time="2024-02-12T19:20:24.745233824Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Feb 12 19:20:24.755006 env[1381]: time="2024-02-12T19:20:24.754970577Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\""
Feb 12 19:20:25.793804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1780081873.mount: Deactivated successfully.
Feb 12 19:20:28.303594 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 12 19:20:28.303763 systemd[1]: Stopped kubelet.service.
Feb 12 19:20:28.305135 systemd[1]: Started kubelet.service.
Feb 12 19:20:28.344114 kubelet[1968]: E0212 19:20:28.344071 1968 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Feb 12 19:20:28.346526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 12 19:20:28.346652 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 12 19:20:28.609255 env[1381]: time="2024-02-12T19:20:28.609147923Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:28.617706 env[1381]: time="2024-02-12T19:20:28.617664193Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:28.622118 env[1381]: time="2024-02-12T19:20:28.622082851Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:28.627516 env[1381]: time="2024-02-12T19:20:28.627462485Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:28.627802 env[1381]: time="2024-02-12T19:20:28.627768207Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\" returns image reference \"sha256:24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737\""
Feb 12 19:20:28.636041 env[1381]: time="2024-02-12T19:20:28.636005763Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Feb 12 19:20:29.387039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3774638669.mount: Deactivated successfully.
Feb 12 19:20:31.188367 env[1381]: time="2024-02-12T19:20:31.188320694Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:31.204386 env[1381]: time="2024-02-12T19:20:31.204348301Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:31.213562 env[1381]: time="2024-02-12T19:20:31.213528733Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:31.239201 env[1381]: time="2024-02-12T19:20:31.239158796Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:31.239635 env[1381]: time="2024-02-12T19:20:31.239601145Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\""
Feb 12 19:20:37.529760 systemd[1]: Stopped kubelet.service.
Feb 12 19:20:37.546023 systemd[1]: Reloading.
Feb 12 19:20:37.645193 /usr/lib/systemd/system-generators/torcx-generator[2065]: time="2024-02-12T19:20:37Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 19:20:37.647535 /usr/lib/systemd/system-generators/torcx-generator[2065]: time="2024-02-12T19:20:37Z" level=info msg="torcx already run"
Feb 12 19:20:37.703195 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 19:20:37.703216 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 19:20:37.720357 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 19:20:37.816939 systemd[1]: Started kubelet.service.
Feb 12 19:20:37.865414 kubelet[2124]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 19:20:37.865414 kubelet[2124]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 12 19:20:37.865414 kubelet[2124]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 19:20:37.865780 kubelet[2124]: I0212 19:20:37.865485 2124 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 12 19:20:38.550727 kubelet[2124]: I0212 19:20:38.550698 2124 server.go:415] "Kubelet version" kubeletVersion="v1.27.2"
Feb 12 19:20:38.550877 kubelet[2124]: I0212 19:20:38.550867 2124 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 12 19:20:38.551143 kubelet[2124]: I0212 19:20:38.551130 2124 server.go:837] "Client rotation is on, will bootstrap in background"
Feb 12 19:20:38.556527 kubelet[2124]: E0212 19:20:38.556462 2124 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.4:6443: connect: connection refused
Feb 12 19:20:38.556527 kubelet[2124]: W0212 19:20:38.556497 2124 machine.go:65] Cannot read vendor id correctly, set empty.
Feb 12 19:20:38.556706 kubelet[2124]: I0212 19:20:38.556692 2124 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 12 19:20:38.557044 kubelet[2124]: I0212 19:20:38.557012 2124 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 12 19:20:38.557214 kubelet[2124]: I0212 19:20:38.557200 2124 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 12 19:20:38.557290 kubelet[2124]: I0212 19:20:38.557273 2124 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 12 19:20:38.557373 kubelet[2124]: I0212 19:20:38.557295 2124 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 12 19:20:38.557373 kubelet[2124]: I0212 19:20:38.557305 2124 container_manager_linux.go:302] "Creating device plugin manager"
Feb 12 19:20:38.557429 kubelet[2124]: I0212 19:20:38.557404 2124 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 19:20:38.561441 kubelet[2124]: I0212 19:20:38.561413 2124 kubelet.go:405] "Attempting to sync node with API server"
Feb 12 19:20:38.561441 kubelet[2124]: I0212 19:20:38.561443 2124 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 12 19:20:38.561589 kubelet[2124]: I0212 19:20:38.561484 2124 kubelet.go:309] "Adding apiserver pod source"
Feb 12 19:20:38.561589 kubelet[2124]: I0212 19:20:38.561499 2124 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 12 19:20:38.562944 kubelet[2124]: I0212 19:20:38.562920 2124 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 12 19:20:38.563192 kubelet[2124]: W0212 19:20:38.563164 2124 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 12 19:20:38.563557 kubelet[2124]: I0212 19:20:38.563534 2124 server.go:1168] "Started kubelet"
Feb 12 19:20:38.563658 kubelet[2124]: W0212 19:20:38.563636 2124 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-f75f2c89dc&limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused
Feb 12 19:20:38.563702 kubelet[2124]: E0212 19:20:38.563667 2124 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-f75f2c89dc&limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused
Feb 12 19:20:38.563744 kubelet[2124]: W0212 19:20:38.563710 2124 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused
Feb 12 19:20:38.563780 kubelet[2124]: E0212 19:20:38.563747 2124 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused
Feb 12 19:20:38.564737 kubelet[2124]: I0212 19:20:38.564707 2124 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 12 19:20:38.565270 kubelet[2124]: I0212 19:20:38.565229 2124 server.go:461] "Adding debug handlers to kubelet server"
Feb 12 19:20:38.566202 kubelet[2124]: I0212 19:20:38.566159 2124 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Feb 12 19:20:38.567209 kubelet[2124]: E0212 19:20:38.567192 2124 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 12 19:20:38.567312 kubelet[2124]: E0212 19:20:38.567302 2124 kubelet.go:1400] "Image garbage collection failed once.
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 19:20:38.567676 kubelet[2124]: E0212 19:20:38.567600 2124 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-f75f2c89dc.17b333ccd6324cd8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-f75f2c89dc", UID:"ci-3510.3.2-a-f75f2c89dc", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-f75f2c89dc"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 20, 38, 563515608, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 20, 38, 563515608, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.200.20.4:6443/api/v1/namespaces/default/events": dial tcp 10.200.20.4:6443: connect: connection refused'(may retry after sleeping) Feb 12 19:20:38.573740 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 12 19:20:38.573897 kubelet[2124]: I0212 19:20:38.573872 2124 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 12 19:20:38.575571 kubelet[2124]: I0212 19:20:38.575123 2124 volume_manager.go:284] "Starting Kubelet Volume Manager"
Feb 12 19:20:38.575571 kubelet[2124]: I0212 19:20:38.575229 2124 desired_state_of_world_populator.go:145] "Desired state populator starts to run"
Feb 12 19:20:38.575821 kubelet[2124]: W0212 19:20:38.575771 2124 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused
Feb 12 19:20:38.575879 kubelet[2124]: E0212 19:20:38.575824 2124 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused
Feb 12 19:20:38.576498 kubelet[2124]: E0212 19:20:38.576175 2124 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-f75f2c89dc?timeout=10s\": dial tcp 10.200.20.4:6443: connect: connection refused" interval="200ms"
Feb 12 19:20:38.660583 kubelet[2124]: I0212 19:20:38.660542 2124 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 12 19:20:38.661953 kubelet[2124]: I0212 19:20:38.661935 2124 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 12 19:20:38.662070 kubelet[2124]: I0212 19:20:38.662060 2124 status_manager.go:207] "Starting to sync pod status with apiserver"
Feb 12 19:20:38.662137 kubelet[2124]: I0212 19:20:38.662128 2124 kubelet.go:2257] "Starting kubelet main sync loop"
Feb 12 19:20:38.662235 kubelet[2124]: E0212 19:20:38.662225 2124 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 12 19:20:38.662900 kubelet[2124]: W0212 19:20:38.662869 2124 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused
Feb 12 19:20:38.663157 kubelet[2124]: E0212 19:20:38.663126 2124 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused
Feb 12 19:20:38.676551 kubelet[2124]: I0212 19:20:38.676529 2124 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-f75f2c89dc"
Feb 12 19:20:38.676922 kubelet[2124]: I0212 19:20:38.676528 2124 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 12 19:20:38.677086 kubelet[2124]: I0212 19:20:38.677077 2124 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 12 19:20:38.677167 kubelet[2124]: I0212 19:20:38.677159 2124 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 19:20:38.677412 kubelet[2124]: E0212 19:20:38.677059 2124 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.4:6443/api/v1/nodes\": dial tcp 10.200.20.4:6443: connect: connection refused" node="ci-3510.3.2-a-f75f2c89dc"
Feb 12 19:20:38.684685 kubelet[2124]: I0212 19:20:38.684663 2124 policy_none.go:49] "None policy: Start"
Feb 12 19:20:38.685568 kubelet[2124]: I0212 19:20:38.685552 2124 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 12 19:20:38.685683 kubelet[2124]: I0212 19:20:38.685673 2124 state_mem.go:35] "Initializing new in-memory state store"
Feb 12 19:20:38.694066 systemd[1]: Created slice kubepods.slice.
Feb 12 19:20:38.698207 systemd[1]: Created slice kubepods-besteffort.slice.
Feb 12 19:20:38.707419 systemd[1]: Created slice kubepods-burstable.slice.
Feb 12 19:20:38.709255 kubelet[2124]: I0212 19:20:38.709231 2124 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 12 19:20:38.710177 kubelet[2124]: I0212 19:20:38.710142 2124 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 12 19:20:38.710759 kubelet[2124]: E0212 19:20:38.710720 2124 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-f75f2c89dc\" not found"
Feb 12 19:20:38.763196 kubelet[2124]: I0212 19:20:38.763161 2124 topology_manager.go:212] "Topology Admit Handler"
Feb 12 19:20:38.764791 kubelet[2124]: I0212 19:20:38.764769 2124 topology_manager.go:212] "Topology Admit Handler"
Feb 12 19:20:38.766010 kubelet[2124]: I0212 19:20:38.765988 2124 topology_manager.go:212] "Topology Admit Handler"
Feb 12 19:20:38.772532 systemd[1]: Created slice kubepods-burstable-pod5a4b237f12c69cbf4b1a94e3995226c3.slice.
Feb 12 19:20:38.776788 kubelet[2124]: E0212 19:20:38.776761 2124 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-f75f2c89dc?timeout=10s\": dial tcp 10.200.20.4:6443: connect: connection refused" interval="400ms"
Feb 12 19:20:38.784035 systemd[1]: Created slice kubepods-burstable-pod222a629b26639d91ba66f1699403f812.slice.
Feb 12 19:20:38.788132 systemd[1]: Created slice kubepods-burstable-podd1c4fb393a269fbe45970811629edabd.slice.
Feb 12 19:20:38.877061 kubelet[2124]: I0212 19:20:38.877034 2124 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5a4b237f12c69cbf4b1a94e3995226c3-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-f75f2c89dc\" (UID: \"5a4b237f12c69cbf4b1a94e3995226c3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-f75f2c89dc"
Feb 12 19:20:38.877424 kubelet[2124]: I0212 19:20:38.877409 2124 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5a4b237f12c69cbf4b1a94e3995226c3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-f75f2c89dc\" (UID: \"5a4b237f12c69cbf4b1a94e3995226c3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-f75f2c89dc"
Feb 12 19:20:38.877558 kubelet[2124]: I0212 19:20:38.877545 2124 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/222a629b26639d91ba66f1699403f812-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-f75f2c89dc\" (UID: \"222a629b26639d91ba66f1699403f812\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-f75f2c89dc"
Feb 12 19:20:38.877940 kubelet[2124]: I0212 19:20:38.877924 2124 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5a4b237f12c69cbf4b1a94e3995226c3-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-f75f2c89dc\" (UID: \"5a4b237f12c69cbf4b1a94e3995226c3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-f75f2c89dc"
Feb 12 19:20:38.878092 kubelet[2124]: I0212 19:20:38.878079 2124 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5a4b237f12c69cbf4b1a94e3995226c3-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-f75f2c89dc\" (UID: \"5a4b237f12c69cbf4b1a94e3995226c3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-f75f2c89dc"
Feb 12 19:20:38.878173 kubelet[2124]: I0212 19:20:38.878163 2124 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5a4b237f12c69cbf4b1a94e3995226c3-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-f75f2c89dc\" (UID: \"5a4b237f12c69cbf4b1a94e3995226c3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-f75f2c89dc"
Feb 12 19:20:38.878255 kubelet[2124]: I0212 19:20:38.878245 2124 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1c4fb393a269fbe45970811629edabd-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-f75f2c89dc\" (UID: \"d1c4fb393a269fbe45970811629edabd\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-f75f2c89dc"
Feb 12 19:20:38.878334 kubelet[2124]: I0212 19:20:38.878324 2124 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/222a629b26639d91ba66f1699403f812-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-f75f2c89dc\" (UID: \"222a629b26639d91ba66f1699403f812\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-f75f2c89dc"
Feb 12 19:20:38.878410 kubelet[2124]: I0212 19:20:38.878401 2124 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/222a629b26639d91ba66f1699403f812-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-f75f2c89dc\" (UID: \"222a629b26639d91ba66f1699403f812\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-f75f2c89dc"
Feb 12 19:20:38.878851 kubelet[2124]: I0212 19:20:38.878831 2124 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-f75f2c89dc"
Feb 12 19:20:38.879134 kubelet[2124]: E0212 19:20:38.879114 2124 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.4:6443/api/v1/nodes\": dial tcp 10.200.20.4:6443: connect: connection refused" node="ci-3510.3.2-a-f75f2c89dc"
Feb 12 19:20:39.083843 env[1381]: time="2024-02-12T19:20:39.083792834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-f75f2c89dc,Uid:5a4b237f12c69cbf4b1a94e3995226c3,Namespace:kube-system,Attempt:0,}"
Feb 12 19:20:39.087909 env[1381]: time="2024-02-12T19:20:39.087862787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-f75f2c89dc,Uid:222a629b26639d91ba66f1699403f812,Namespace:kube-system,Attempt:0,}"
Feb 12 19:20:39.090851 env[1381]: time="2024-02-12T19:20:39.090729573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-f75f2c89dc,Uid:d1c4fb393a269fbe45970811629edabd,Namespace:kube-system,Attempt:0,}"
Feb 12 19:20:39.177731 kubelet[2124]: E0212 19:20:39.177632 2124 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-f75f2c89dc?timeout=10s\": dial tcp 10.200.20.4:6443: connect: connection refused" interval="800ms"
Feb 12 19:20:39.281004 kubelet[2124]: I0212 19:20:39.280971 2124 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-f75f2c89dc"
Feb 12 19:20:39.281350 kubelet[2124]: E0212 19:20:39.281328 2124 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.4:6443/api/v1/nodes\": dial tcp 10.200.20.4:6443: connect: connection refused" node="ci-3510.3.2-a-f75f2c89dc"
Feb 12 19:20:39.364844 kubelet[2124]: W0212 19:20:39.364784 2124 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-f75f2c89dc&limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused
Feb 12 19:20:39.364844 kubelet[2124]: E0212 19:20:39.364848 2124 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-f75f2c89dc&limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused
Feb 12 19:20:39.451914 kubelet[2124]: W0212 19:20:39.451795 2124 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused
Feb 12 19:20:39.451914 kubelet[2124]: E0212 19:20:39.451859 2124 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused
Feb 12 19:20:39.737780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount988127217.mount: Deactivated successfully.
Feb 12 19:20:39.779334 env[1381]: time="2024-02-12T19:20:39.779292102Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:39.783881 env[1381]: time="2024-02-12T19:20:39.783848314Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:39.799119 env[1381]: time="2024-02-12T19:20:39.799073829Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:39.802832 env[1381]: time="2024-02-12T19:20:39.802800272Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:39.809695 env[1381]: time="2024-02-12T19:20:39.809660075Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:39.817663 env[1381]: time="2024-02-12T19:20:39.817619144Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:39.823057 env[1381]: time="2024-02-12T19:20:39.823013207Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:39.828664 env[1381]: time="2024-02-12T19:20:39.828630637Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:39.836581 env[1381]: time="2024-02-12T19:20:39.836542656Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:39.847422 env[1381]: time="2024-02-12T19:20:39.847382114Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:39.855788 env[1381]: time="2024-02-12T19:20:39.855742344Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:39.869176 env[1381]: time="2024-02-12T19:20:39.869135005Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:20:39.937148 env[1381]: time="2024-02-12T19:20:39.932682247Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:20:39.937148 env[1381]: time="2024-02-12T19:20:39.932717974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:20:39.937148 env[1381]: time="2024-02-12T19:20:39.932741139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:20:39.937148 env[1381]: time="2024-02-12T19:20:39.932861284Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3b69b5284cb84b64ff7309c9ebe843d80585513a0a94f451b590fb34b5bbd2c0 pid=2169 runtime=io.containerd.runc.v2
Feb 12 19:20:39.937548 env[1381]: time="2024-02-12T19:20:39.933628761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:20:39.937548 env[1381]: time="2024-02-12T19:20:39.933661127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:20:39.937548 env[1381]: time="2024-02-12T19:20:39.933670489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:20:39.937548 env[1381]: time="2024-02-12T19:20:39.933866529Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ae30f5792e278c624c5ffff7899fec92515c677f1c4b8c1680bece8dce28a240 pid=2175 runtime=io.containerd.runc.v2
Feb 12 19:20:39.958128 systemd[1]: Started cri-containerd-3b69b5284cb84b64ff7309c9ebe843d80585513a0a94f451b590fb34b5bbd2c0.scope.
Feb 12 19:20:39.966992 env[1381]: time="2024-02-12T19:20:39.966916252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:20:39.967195 env[1381]: time="2024-02-12T19:20:39.967171584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:20:39.967291 env[1381]: time="2024-02-12T19:20:39.967269644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:20:39.967531 env[1381]: time="2024-02-12T19:20:39.967501532Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1db6740f7ba42644ffc0b48d33bfabc6e56ffa9b23f5fac200d01e2d78e4be07 pid=2215 runtime=io.containerd.runc.v2
Feb 12 19:20:39.974845 systemd[1]: Started cri-containerd-ae30f5792e278c624c5ffff7899fec92515c677f1c4b8c1680bece8dce28a240.scope.
Feb 12 19:20:39.978692 kubelet[2124]: E0212 19:20:39.978656 2124 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-f75f2c89dc?timeout=10s\": dial tcp 10.200.20.4:6443: connect: connection refused" interval="1.6s"
Feb 12 19:20:39.997280 systemd[1]: Started cri-containerd-1db6740f7ba42644ffc0b48d33bfabc6e56ffa9b23f5fac200d01e2d78e4be07.scope.
Feb 12 19:20:40.005850 env[1381]: time="2024-02-12T19:20:40.005785740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-f75f2c89dc,Uid:5a4b237f12c69cbf4b1a94e3995226c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b69b5284cb84b64ff7309c9ebe843d80585513a0a94f451b590fb34b5bbd2c0\""
Feb 12 19:20:40.009914 env[1381]: time="2024-02-12T19:20:40.009869397Z" level=info msg="CreateContainer within sandbox \"3b69b5284cb84b64ff7309c9ebe843d80585513a0a94f451b590fb34b5bbd2c0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 12 19:20:40.047913 env[1381]: time="2024-02-12T19:20:40.047864875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-f75f2c89dc,Uid:222a629b26639d91ba66f1699403f812,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae30f5792e278c624c5ffff7899fec92515c677f1c4b8c1680bece8dce28a240\""
Feb 12 19:20:40.050751 env[1381]: time="2024-02-12T19:20:40.050718886Z" level=info msg="CreateContainer within sandbox \"ae30f5792e278c624c5ffff7899fec92515c677f1c4b8c1680bece8dce28a240\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 12 19:20:40.054526 env[1381]: time="2024-02-12T19:20:40.054486359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-f75f2c89dc,Uid:d1c4fb393a269fbe45970811629edabd,Namespace:kube-system,Attempt:0,} returns sandbox id \"1db6740f7ba42644ffc0b48d33bfabc6e56ffa9b23f5fac200d01e2d78e4be07\""
Feb 12 19:20:40.056829 env[1381]: time="2024-02-12T19:20:40.056801542Z" level=info msg="CreateContainer within sandbox \"1db6740f7ba42644ffc0b48d33bfabc6e56ffa9b23f5fac200d01e2d78e4be07\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 12 19:20:40.062358 env[1381]: time="2024-02-12T19:20:40.062319126Z" level=info msg="CreateContainer within sandbox \"3b69b5284cb84b64ff7309c9ebe843d80585513a0a94f451b590fb34b5bbd2c0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3ceabdff883ce64d6725a66af7573b7e369f61b390f3e31dc184a1096f9bb978\""
Feb 12 19:20:40.062876 kubelet[2124]: W0212 19:20:40.062782 2124 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused
Feb 12 19:20:40.062876 kubelet[2124]: E0212 19:20:40.062854 2124 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused
Feb 12 19:20:40.063502 env[1381]: time="2024-02-12T19:20:40.063441430Z" level=info msg="StartContainer for \"3ceabdff883ce64d6725a66af7573b7e369f61b390f3e31dc184a1096f9bb978\""
Feb 12 19:20:40.079178 systemd[1]: Started cri-containerd-3ceabdff883ce64d6725a66af7573b7e369f61b390f3e31dc184a1096f9bb978.scope.
Feb 12 19:20:40.083256 kubelet[2124]: I0212 19:20:40.082944 2124 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-f75f2c89dc"
Feb 12 19:20:40.083508 kubelet[2124]: E0212 19:20:40.083446 2124 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.4:6443/api/v1/nodes\": dial tcp 10.200.20.4:6443: connect: connection refused" node="ci-3510.3.2-a-f75f2c89dc"
Feb 12 19:20:40.116326 env[1381]: time="2024-02-12T19:20:40.116285678Z" level=info msg="StartContainer for \"3ceabdff883ce64d6725a66af7573b7e369f61b390f3e31dc184a1096f9bb978\" returns successfully"
Feb 12 19:20:40.123792 env[1381]: time="2024-02-12T19:20:40.123750531Z" level=info msg="CreateContainer within sandbox \"ae30f5792e278c624c5ffff7899fec92515c677f1c4b8c1680bece8dce28a240\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3581007add7703fc3ae1be4e43ca346cc159d6706e32caef11e21a981d565760\""
Feb 12 19:20:40.124404 env[1381]: time="2024-02-12T19:20:40.124368774Z" level=info msg="StartContainer for \"3581007add7703fc3ae1be4e43ca346cc159d6706e32caef11e21a981d565760\""
Feb 12 19:20:40.125555 env[1381]: time="2024-02-12T19:20:40.125518364Z" level=info msg="CreateContainer within sandbox \"1db6740f7ba42644ffc0b48d33bfabc6e56ffa9b23f5fac200d01e2d78e4be07\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"54a976b91ea9636d07069319665b164b53955a100f454c63ea3585c5c04912f6\""
Feb 12 19:20:40.127844 env[1381]: time="2024-02-12T19:20:40.127094239Z" level=info msg="StartContainer for \"54a976b91ea9636d07069319665b164b53955a100f454c63ea3585c5c04912f6\""
Feb 12 19:20:40.127923 kubelet[2124]: W0212 19:20:40.127291 2124 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused
Feb 12 19:20:40.127923 kubelet[2124]: E0212 19:20:40.127358 2124 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.4:6443: connect: connection refused
Feb 12 19:20:40.150500 systemd[1]: Started cri-containerd-54a976b91ea9636d07069319665b164b53955a100f454c63ea3585c5c04912f6.scope.
Feb 12 19:20:40.161817 systemd[1]: Started cri-containerd-3581007add7703fc3ae1be4e43ca346cc159d6706e32caef11e21a981d565760.scope.
Feb 12 19:20:40.209527 env[1381]: time="2024-02-12T19:20:40.209458751Z" level=info msg="StartContainer for \"54a976b91ea9636d07069319665b164b53955a100f454c63ea3585c5c04912f6\" returns successfully"
Feb 12 19:20:40.233061 env[1381]: time="2024-02-12T19:20:40.232998818Z" level=info msg="StartContainer for \"3581007add7703fc3ae1be4e43ca346cc159d6706e32caef11e21a981d565760\" returns successfully"
Feb 12 19:20:41.684679 kubelet[2124]: I0212 19:20:41.684655 2124 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-f75f2c89dc"
Feb 12 19:20:42.817229 kubelet[2124]: E0212 19:20:42.817187 2124 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-f75f2c89dc\" not found" node="ci-3510.3.2-a-f75f2c89dc"
Feb 12 19:20:42.882969 kubelet[2124]: I0212 19:20:42.882931 2124 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-f75f2c89dc"
Feb 12 19:20:43.564444 kubelet[2124]: I0212 19:20:43.564409 2124 apiserver.go:52] "Watching apiserver"
Feb 12 19:20:43.576340 kubelet[2124]: I0212 19:20:43.576294 2124 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
Feb 12 19:20:43.604717 kubelet[2124]: I0212 19:20:43.604682 2124 reconciler.go:41] "Reconciler: start to sync state"
Feb 12 19:20:44.776949 kubelet[2124]: W0212 19:20:44.776924 2124 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 12 19:20:45.409021 systemd[1]: Reloading.
Feb 12 19:20:45.482533 /usr/lib/systemd/system-generators/torcx-generator[2411]: time="2024-02-12T19:20:45Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 19:20:45.482562 /usr/lib/systemd/system-generators/torcx-generator[2411]: time="2024-02-12T19:20:45Z" level=info msg="torcx already run"
Feb 12 19:20:45.558011 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 19:20:45.558031 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 19:20:45.576716 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 19:20:45.673304 kubelet[2124]: I0212 19:20:45.672766 2124 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 12 19:20:45.672999 systemd[1]: Stopping kubelet.service...
Feb 12 19:20:45.692878 systemd[1]: kubelet.service: Deactivated successfully.
Feb 12 19:20:45.693080 systemd[1]: Stopped kubelet.service.
Feb 12 19:20:45.695192 systemd[1]: Started kubelet.service.
Feb 12 19:20:45.779765 kubelet[2471]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 19:20:45.780049 kubelet[2471]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 12 19:20:45.780100 kubelet[2471]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 19:20:45.780340 kubelet[2471]: I0212 19:20:45.780241 2471 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 12 19:20:45.787777 kubelet[2471]: I0212 19:20:45.787748 2471 server.go:415] "Kubelet version" kubeletVersion="v1.27.2"
Feb 12 19:20:45.787777 kubelet[2471]: I0212 19:20:45.787775 2471 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 12 19:20:45.788015 kubelet[2471]: I0212 19:20:45.787996 2471 server.go:837] "Client rotation is on, will bootstrap in background"
Feb 12 19:20:45.789512 kubelet[2471]: I0212 19:20:45.789488 2471 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 12 19:20:45.790334 kubelet[2471]: I0212 19:20:45.790318 2471 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 12 19:20:45.796642 kubelet[2471]: W0212 19:20:45.796620 2471 machine.go:65] Cannot read vendor id correctly, set empty.
Feb 12 19:20:45.797233 kubelet[2471]: I0212 19:20:45.797217 2471 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 12 19:20:45.797427 kubelet[2471]: I0212 19:20:45.797415 2471 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 12 19:20:45.797518 kubelet[2471]: I0212 19:20:45.797504 2471 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 12 19:20:45.797604 kubelet[2471]: I0212 19:20:45.797526 2471 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 12 19:20:45.797604 kubelet[2471]: I0212 19:20:45.797536 2471 container_manager_linux.go:302] "Creating device plugin manager"
Feb 12 19:20:45.797604 kubelet[2471]: I0212 19:20:45.797566 2471 state_mem.go:36] "Initialized new in-memory state store"
Feb 12
19:20:45.801908 kubelet[2471]: I0212 19:20:45.801888 2471 kubelet.go:405] "Attempting to sync node with API server" Feb 12 19:20:45.802020 kubelet[2471]: I0212 19:20:45.802010 2471 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 19:20:45.802093 kubelet[2471]: I0212 19:20:45.802084 2471 kubelet.go:309] "Adding apiserver pod source" Feb 12 19:20:45.802156 kubelet[2471]: I0212 19:20:45.802147 2471 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 19:20:45.814224 kubelet[2471]: I0212 19:20:45.814206 2471 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 19:20:45.817740 kubelet[2471]: I0212 19:20:45.817720 2471 server.go:1168] "Started kubelet" Feb 12 19:20:45.818850 kubelet[2471]: I0212 19:20:45.818834 2471 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 19:20:45.841228 kubelet[2471]: I0212 19:20:45.841205 2471 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 19:20:45.841675 kubelet[2471]: I0212 19:20:45.841660 2471 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 12 19:20:45.843271 kubelet[2471]: I0212 19:20:45.843256 2471 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 12 19:20:45.843594 sudo[2487]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 12 19:20:45.843825 sudo[2487]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 12 19:20:45.847202 kubelet[2471]: I0212 19:20:45.847185 2471 server.go:461] "Adding debug handlers to kubelet server" Feb 12 19:20:45.851463 kubelet[2471]: E0212 19:20:45.850631 2471 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 19:20:45.867147 kubelet[2471]: E0212 
19:20:45.867125 2471 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 19:20:45.875515 kubelet[2471]: I0212 19:20:45.852406 2471 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 12 19:20:45.897202 kubelet[2471]: I0212 19:20:45.862438 2471 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 12 19:20:45.905880 kubelet[2471]: I0212 19:20:45.905839 2471 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 12 19:20:45.905880 kubelet[2471]: I0212 19:20:45.905878 2471 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 12 19:20:45.906314 kubelet[2471]: I0212 19:20:45.905894 2471 kubelet.go:2257] "Starting kubelet main sync loop" Feb 12 19:20:45.906314 kubelet[2471]: E0212 19:20:45.905955 2471 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 12 19:20:45.951556 kubelet[2471]: I0212 19:20:45.951214 2471 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-f75f2c89dc" Feb 12 19:20:45.994186 kubelet[2471]: I0212 19:20:45.994152 2471 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-f75f2c89dc" Feb 12 19:20:45.994391 kubelet[2471]: I0212 19:20:45.994381 2471 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-f75f2c89dc" Feb 12 19:20:46.006554 kubelet[2471]: E0212 19:20:46.006528 2471 kubelet.go:2281] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 12 19:20:46.026240 kubelet[2471]: I0212 19:20:46.022023 2471 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 19:20:46.026240 kubelet[2471]: I0212 19:20:46.022053 2471 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 19:20:46.026240 kubelet[2471]: 
I0212 19:20:46.022072 2471 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:20:46.026240 kubelet[2471]: I0212 19:20:46.022271 2471 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 12 19:20:46.026240 kubelet[2471]: I0212 19:20:46.022287 2471 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 12 19:20:46.026240 kubelet[2471]: I0212 19:20:46.022294 2471 policy_none.go:49] "None policy: Start" Feb 12 19:20:46.026240 kubelet[2471]: I0212 19:20:46.023955 2471 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 19:20:46.026240 kubelet[2471]: I0212 19:20:46.023975 2471 state_mem.go:35] "Initializing new in-memory state store" Feb 12 19:20:46.026240 kubelet[2471]: I0212 19:20:46.024229 2471 state_mem.go:75] "Updated machine memory state" Feb 12 19:20:46.034123 kubelet[2471]: I0212 19:20:46.034103 2471 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 19:20:46.034488 kubelet[2471]: I0212 19:20:46.034461 2471 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 19:20:46.207375 kubelet[2471]: I0212 19:20:46.207275 2471 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:20:46.207375 kubelet[2471]: I0212 19:20:46.207371 2471 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:20:46.207547 kubelet[2471]: I0212 19:20:46.207406 2471 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:20:46.215878 kubelet[2471]: W0212 19:20:46.215854 2471 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 12 19:20:46.219338 kubelet[2471]: W0212 19:20:46.219315 2471 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 12 19:20:46.223076 kubelet[2471]: W0212 19:20:46.223032 2471 warnings.go:70] 
metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 12 19:20:46.223187 kubelet[2471]: E0212 19:20:46.223097 2471 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-f75f2c89dc\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-f75f2c89dc" Feb 12 19:20:46.358814 kubelet[2471]: I0212 19:20:46.358775 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5a4b237f12c69cbf4b1a94e3995226c3-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-f75f2c89dc\" (UID: \"5a4b237f12c69cbf4b1a94e3995226c3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-f75f2c89dc" Feb 12 19:20:46.358814 kubelet[2471]: I0212 19:20:46.358818 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1c4fb393a269fbe45970811629edabd-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-f75f2c89dc\" (UID: \"d1c4fb393a269fbe45970811629edabd\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-f75f2c89dc" Feb 12 19:20:46.358956 kubelet[2471]: I0212 19:20:46.358842 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/222a629b26639d91ba66f1699403f812-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-f75f2c89dc\" (UID: \"222a629b26639d91ba66f1699403f812\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-f75f2c89dc" Feb 12 19:20:46.358956 kubelet[2471]: I0212 19:20:46.358862 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5a4b237f12c69cbf4b1a94e3995226c3-ca-certs\") pod 
\"kube-controller-manager-ci-3510.3.2-a-f75f2c89dc\" (UID: \"5a4b237f12c69cbf4b1a94e3995226c3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-f75f2c89dc" Feb 12 19:20:46.358956 kubelet[2471]: I0212 19:20:46.358883 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5a4b237f12c69cbf4b1a94e3995226c3-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-f75f2c89dc\" (UID: \"5a4b237f12c69cbf4b1a94e3995226c3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-f75f2c89dc" Feb 12 19:20:46.358956 kubelet[2471]: I0212 19:20:46.358901 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5a4b237f12c69cbf4b1a94e3995226c3-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-f75f2c89dc\" (UID: \"5a4b237f12c69cbf4b1a94e3995226c3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-f75f2c89dc" Feb 12 19:20:46.358956 kubelet[2471]: I0212 19:20:46.358921 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5a4b237f12c69cbf4b1a94e3995226c3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-f75f2c89dc\" (UID: \"5a4b237f12c69cbf4b1a94e3995226c3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-f75f2c89dc" Feb 12 19:20:46.359088 kubelet[2471]: I0212 19:20:46.358939 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/222a629b26639d91ba66f1699403f812-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-f75f2c89dc\" (UID: \"222a629b26639d91ba66f1699403f812\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-f75f2c89dc" Feb 12 19:20:46.359088 kubelet[2471]: I0212 19:20:46.358957 2471 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/222a629b26639d91ba66f1699403f812-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-f75f2c89dc\" (UID: \"222a629b26639d91ba66f1699403f812\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-f75f2c89dc" Feb 12 19:20:46.418790 sudo[2487]: pam_unix(sudo:session): session closed for user root Feb 12 19:20:46.809624 kubelet[2471]: I0212 19:20:46.809589 2471 apiserver.go:52] "Watching apiserver" Feb 12 19:20:46.877447 kubelet[2471]: I0212 19:20:46.877418 2471 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 12 19:20:46.963204 kubelet[2471]: I0212 19:20:46.962956 2471 reconciler.go:41] "Reconciler: start to sync state" Feb 12 19:20:47.005152 kubelet[2471]: I0212 19:20:47.005125 2471 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-f75f2c89dc" podStartSLOduration=1.00507828 podCreationTimestamp="2024-02-12 19:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:20:46.994303487 +0000 UTC m=+1.288780471" watchObservedRunningTime="2024-02-12 19:20:47.00507828 +0000 UTC m=+1.299555224" Feb 12 19:20:47.015882 kubelet[2471]: I0212 19:20:47.015854 2471 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-f75f2c89dc" podStartSLOduration=3.015819 podCreationTimestamp="2024-02-12 19:20:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:20:47.006396466 +0000 UTC m=+1.300873410" watchObservedRunningTime="2024-02-12 19:20:47.015819 +0000 UTC m=+1.310295944" Feb 12 19:20:47.016126 kubelet[2471]: I0212 19:20:47.016114 2471 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-ci-3510.3.2-a-f75f2c89dc" podStartSLOduration=1.016092007 podCreationTimestamp="2024-02-12 19:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:20:47.015211816 +0000 UTC m=+1.309688760" watchObservedRunningTime="2024-02-12 19:20:47.016092007 +0000 UTC m=+1.310568951" Feb 12 19:20:48.374049 sudo[1713]: pam_unix(sudo:session): session closed for user root Feb 12 19:20:48.452685 sshd[1710]: pam_unix(sshd:session): session closed for user core Feb 12 19:20:48.455397 systemd-logind[1370]: Session 7 logged out. Waiting for processes to exit. Feb 12 19:20:48.455578 systemd[1]: sshd@4-10.200.20.4:22-10.200.12.6:32948.service: Deactivated successfully. Feb 12 19:20:48.456257 systemd[1]: session-7.scope: Deactivated successfully. Feb 12 19:20:48.456436 systemd[1]: session-7.scope: Consumed 8.226s CPU time. Feb 12 19:20:48.457074 systemd-logind[1370]: Removed session 7. Feb 12 19:20:58.742608 kubelet[2471]: I0212 19:20:58.742582 2471 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 12 19:20:58.743460 env[1381]: time="2024-02-12T19:20:58.743405933Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 12 19:20:58.743750 kubelet[2471]: I0212 19:20:58.743693 2471 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 12 19:20:59.651729 kubelet[2471]: I0212 19:20:59.651695 2471 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:20:59.656778 systemd[1]: Created slice kubepods-besteffort-pod71372a69_d6f8_40cb_ba49_b9672e3f6623.slice. Feb 12 19:20:59.678337 kubelet[2471]: I0212 19:20:59.678305 2471 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:20:59.683240 systemd[1]: Created slice kubepods-burstable-pod18110bec_347e_47ff_a549_74df337db3e2.slice. 
Feb 12 19:20:59.713031 kubelet[2471]: I0212 19:20:59.712993 2471 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:20:59.717935 systemd[1]: Created slice kubepods-besteffort-podc9a05f5e_b6d7_46b5_93b3_d6b8c1892565.slice. Feb 12 19:20:59.823049 kubelet[2471]: I0212 19:20:59.823017 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq7d9\" (UniqueName: \"kubernetes.io/projected/18110bec-347e-47ff-a549-74df337db3e2-kube-api-access-zq7d9\") pod \"cilium-2tnnh\" (UID: \"18110bec-347e-47ff-a549-74df337db3e2\") " pod="kube-system/cilium-2tnnh" Feb 12 19:20:59.823525 kubelet[2471]: I0212 19:20:59.823512 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-xtables-lock\") pod \"cilium-2tnnh\" (UID: \"18110bec-347e-47ff-a549-74df337db3e2\") " pod="kube-system/cilium-2tnnh" Feb 12 19:20:59.823631 kubelet[2471]: I0212 19:20:59.823622 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c9a05f5e-b6d7-46b5-93b3-d6b8c1892565-cilium-config-path\") pod \"cilium-operator-574c4bb98d-7zvvl\" (UID: \"c9a05f5e-b6d7-46b5-93b3-d6b8c1892565\") " pod="kube-system/cilium-operator-574c4bb98d-7zvvl" Feb 12 19:20:59.823783 kubelet[2471]: I0212 19:20:59.823763 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/71372a69-d6f8-40cb-ba49-b9672e3f6623-kube-proxy\") pod \"kube-proxy-zwr4m\" (UID: \"71372a69-d6f8-40cb-ba49-b9672e3f6623\") " pod="kube-system/kube-proxy-zwr4m" Feb 12 19:20:59.823835 kubelet[2471]: I0212 19:20:59.823821 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-cilium-cgroup\") pod \"cilium-2tnnh\" (UID: \"18110bec-347e-47ff-a549-74df337db3e2\") " pod="kube-system/cilium-2tnnh" Feb 12 19:20:59.823864 kubelet[2471]: I0212 19:20:59.823845 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/18110bec-347e-47ff-a549-74df337db3e2-hubble-tls\") pod \"cilium-2tnnh\" (UID: \"18110bec-347e-47ff-a549-74df337db3e2\") " pod="kube-system/cilium-2tnnh" Feb 12 19:20:59.823889 kubelet[2471]: I0212 19:20:59.823874 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71372a69-d6f8-40cb-ba49-b9672e3f6623-xtables-lock\") pod \"kube-proxy-zwr4m\" (UID: \"71372a69-d6f8-40cb-ba49-b9672e3f6623\") " pod="kube-system/kube-proxy-zwr4m" Feb 12 19:20:59.823913 kubelet[2471]: I0212 19:20:59.823901 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-hostproc\") pod \"cilium-2tnnh\" (UID: \"18110bec-347e-47ff-a549-74df337db3e2\") " pod="kube-system/cilium-2tnnh" Feb 12 19:20:59.823941 kubelet[2471]: I0212 19:20:59.823925 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-host-proc-sys-kernel\") pod \"cilium-2tnnh\" (UID: \"18110bec-347e-47ff-a549-74df337db3e2\") " pod="kube-system/cilium-2tnnh" Feb 12 19:20:59.823966 kubelet[2471]: I0212 19:20:59.823954 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmv7t\" (UniqueName: \"kubernetes.io/projected/c9a05f5e-b6d7-46b5-93b3-d6b8c1892565-kube-api-access-rmv7t\") pod 
\"cilium-operator-574c4bb98d-7zvvl\" (UID: \"c9a05f5e-b6d7-46b5-93b3-d6b8c1892565\") " pod="kube-system/cilium-operator-574c4bb98d-7zvvl" Feb 12 19:20:59.823991 kubelet[2471]: I0212 19:20:59.823983 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr6gl\" (UniqueName: \"kubernetes.io/projected/71372a69-d6f8-40cb-ba49-b9672e3f6623-kube-api-access-kr6gl\") pod \"kube-proxy-zwr4m\" (UID: \"71372a69-d6f8-40cb-ba49-b9672e3f6623\") " pod="kube-system/kube-proxy-zwr4m" Feb 12 19:20:59.824017 kubelet[2471]: I0212 19:20:59.824004 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/18110bec-347e-47ff-a549-74df337db3e2-cilium-config-path\") pod \"cilium-2tnnh\" (UID: \"18110bec-347e-47ff-a549-74df337db3e2\") " pod="kube-system/cilium-2tnnh" Feb 12 19:20:59.824042 kubelet[2471]: I0212 19:20:59.824028 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-etc-cni-netd\") pod \"cilium-2tnnh\" (UID: \"18110bec-347e-47ff-a549-74df337db3e2\") " pod="kube-system/cilium-2tnnh" Feb 12 19:20:59.824069 kubelet[2471]: I0212 19:20:59.824056 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/18110bec-347e-47ff-a549-74df337db3e2-clustermesh-secrets\") pod \"cilium-2tnnh\" (UID: \"18110bec-347e-47ff-a549-74df337db3e2\") " pod="kube-system/cilium-2tnnh" Feb 12 19:20:59.824094 kubelet[2471]: I0212 19:20:59.824075 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-host-proc-sys-net\") pod \"cilium-2tnnh\" (UID: 
\"18110bec-347e-47ff-a549-74df337db3e2\") " pod="kube-system/cilium-2tnnh" Feb 12 19:20:59.824118 kubelet[2471]: I0212 19:20:59.824100 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71372a69-d6f8-40cb-ba49-b9672e3f6623-lib-modules\") pod \"kube-proxy-zwr4m\" (UID: \"71372a69-d6f8-40cb-ba49-b9672e3f6623\") " pod="kube-system/kube-proxy-zwr4m" Feb 12 19:20:59.824118 kubelet[2471]: I0212 19:20:59.824117 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-cilium-run\") pod \"cilium-2tnnh\" (UID: \"18110bec-347e-47ff-a549-74df337db3e2\") " pod="kube-system/cilium-2tnnh" Feb 12 19:20:59.824165 kubelet[2471]: I0212 19:20:59.824135 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-bpf-maps\") pod \"cilium-2tnnh\" (UID: \"18110bec-347e-47ff-a549-74df337db3e2\") " pod="kube-system/cilium-2tnnh" Feb 12 19:20:59.824165 kubelet[2471]: I0212 19:20:59.824155 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-cni-path\") pod \"cilium-2tnnh\" (UID: \"18110bec-347e-47ff-a549-74df337db3e2\") " pod="kube-system/cilium-2tnnh" Feb 12 19:20:59.824213 kubelet[2471]: I0212 19:20:59.824184 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-lib-modules\") pod \"cilium-2tnnh\" (UID: \"18110bec-347e-47ff-a549-74df337db3e2\") " pod="kube-system/cilium-2tnnh" Feb 12 19:20:59.968430 env[1381]: time="2024-02-12T19:20:59.967342596Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zwr4m,Uid:71372a69-d6f8-40cb-ba49-b9672e3f6623,Namespace:kube-system,Attempt:0,}" Feb 12 19:20:59.988118 env[1381]: time="2024-02-12T19:20:59.988068352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2tnnh,Uid:18110bec-347e-47ff-a549-74df337db3e2,Namespace:kube-system,Attempt:0,}" Feb 12 19:21:00.011564 env[1381]: time="2024-02-12T19:21:00.008377513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:21:00.011564 env[1381]: time="2024-02-12T19:21:00.008434841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:21:00.011564 env[1381]: time="2024-02-12T19:21:00.008445482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:21:00.011564 env[1381]: time="2024-02-12T19:21:00.008641828Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c909aef01ee597285c2513c3f98606290bba4367b2fa2f92443990b75a6b8f22 pid=2555 runtime=io.containerd.runc.v2 Feb 12 19:21:00.021310 env[1381]: time="2024-02-12T19:21:00.021273061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-7zvvl,Uid:c9a05f5e-b6d7-46b5-93b3-d6b8c1892565,Namespace:kube-system,Attempt:0,}" Feb 12 19:21:00.024311 systemd[1]: Started cri-containerd-c909aef01ee597285c2513c3f98606290bba4367b2fa2f92443990b75a6b8f22.scope. Feb 12 19:21:00.047040 env[1381]: time="2024-02-12T19:21:00.046713471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:21:00.047040 env[1381]: time="2024-02-12T19:21:00.047001149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:21:00.048560 env[1381]: time="2024-02-12T19:21:00.047155930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:21:00.048560 env[1381]: time="2024-02-12T19:21:00.047376359Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/59bdebc8fcd0c9915a781b0d7c7beb2d8033e4a63e105fe405293e9391873389 pid=2588 runtime=io.containerd.runc.v2 Feb 12 19:21:00.062667 env[1381]: time="2024-02-12T19:21:00.062595255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zwr4m,Uid:71372a69-d6f8-40cb-ba49-b9672e3f6623,Namespace:kube-system,Attempt:0,} returns sandbox id \"c909aef01ee597285c2513c3f98606290bba4367b2fa2f92443990b75a6b8f22\"" Feb 12 19:21:00.068408 env[1381]: time="2024-02-12T19:21:00.068355218Z" level=info msg="CreateContainer within sandbox \"c909aef01ee597285c2513c3f98606290bba4367b2fa2f92443990b75a6b8f22\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 19:21:00.081229 systemd[1]: Started cri-containerd-59bdebc8fcd0c9915a781b0d7c7beb2d8033e4a63e105fe405293e9391873389.scope. Feb 12 19:21:00.090256 env[1381]: time="2024-02-12T19:21:00.090174628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:21:00.090256 env[1381]: time="2024-02-12T19:21:00.090219634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:21:00.090917 env[1381]: time="2024-02-12T19:21:00.090443943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:21:00.091085 env[1381]: time="2024-02-12T19:21:00.091033862Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/697da070cce4885e1ed2c652df137f860f10bf3d893c1bea17d372866d7f10e6 pid=2621 runtime=io.containerd.runc.v2 Feb 12 19:21:00.106033 systemd[1]: Started cri-containerd-697da070cce4885e1ed2c652df137f860f10bf3d893c1bea17d372866d7f10e6.scope. Feb 12 19:21:00.115847 env[1381]: time="2024-02-12T19:21:00.115800662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2tnnh,Uid:18110bec-347e-47ff-a549-74df337db3e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"59bdebc8fcd0c9915a781b0d7c7beb2d8033e4a63e105fe405293e9391873389\"" Feb 12 19:21:00.118877 env[1381]: time="2024-02-12T19:21:00.118285231Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 12 19:21:00.149109 env[1381]: time="2024-02-12T19:21:00.149063268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-7zvvl,Uid:c9a05f5e-b6d7-46b5-93b3-d6b8c1892565,Namespace:kube-system,Attempt:0,} returns sandbox id \"697da070cce4885e1ed2c652df137f860f10bf3d893c1bea17d372866d7f10e6\"" Feb 12 19:21:00.151404 env[1381]: time="2024-02-12T19:21:00.150424288Z" level=info msg="CreateContainer within sandbox \"c909aef01ee597285c2513c3f98606290bba4367b2fa2f92443990b75a6b8f22\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"128616e8fe25e7279476626d5a6645471693c11024f0ffde1cf86b5717516b1d\"" Feb 12 19:21:00.152208 env[1381]: time="2024-02-12T19:21:00.152150277Z" level=info msg="StartContainer for \"128616e8fe25e7279476626d5a6645471693c11024f0ffde1cf86b5717516b1d\"" Feb 12 19:21:00.168378 systemd[1]: Started cri-containerd-128616e8fe25e7279476626d5a6645471693c11024f0ffde1cf86b5717516b1d.scope. 
Feb 12 19:21:00.211086 env[1381]: time="2024-02-12T19:21:00.211034957Z" level=info msg="StartContainer for \"128616e8fe25e7279476626d5a6645471693c11024f0ffde1cf86b5717516b1d\" returns successfully" Feb 12 19:21:00.997418 kubelet[2471]: I0212 19:21:00.997371 2471 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-zwr4m" podStartSLOduration=1.997331269 podCreationTimestamp="2024-02-12 19:20:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:21:00.996485917 +0000 UTC m=+15.290962861" watchObservedRunningTime="2024-02-12 19:21:00.997331269 +0000 UTC m=+15.291808213" Feb 12 19:21:05.074390 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3822931834.mount: Deactivated successfully. Feb 12 19:21:07.261327 env[1381]: time="2024-02-12T19:21:07.261276360Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:21:07.271657 env[1381]: time="2024-02-12T19:21:07.271605771Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:21:07.277692 env[1381]: time="2024-02-12T19:21:07.277656161Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:21:07.278149 env[1381]: time="2024-02-12T19:21:07.278119295Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference 
\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 12 19:21:07.279419 env[1381]: time="2024-02-12T19:21:07.279384123Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 12 19:21:07.282268 env[1381]: time="2024-02-12T19:21:07.282238018Z" level=info msg="CreateContainer within sandbox \"59bdebc8fcd0c9915a781b0d7c7beb2d8033e4a63e105fe405293e9391873389\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:21:07.312975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2057521245.mount: Deactivated successfully. Feb 12 19:21:07.326220 env[1381]: time="2024-02-12T19:21:07.326164530Z" level=info msg="CreateContainer within sandbox \"59bdebc8fcd0c9915a781b0d7c7beb2d8033e4a63e105fe405293e9391873389\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"12b2d62eeb28b84cec9ddcd7e62b626fe58f50a82d646c201988e9fc57525ea5\"" Feb 12 19:21:07.327118 env[1381]: time="2024-02-12T19:21:07.327091399Z" level=info msg="StartContainer for \"12b2d62eeb28b84cec9ddcd7e62b626fe58f50a82d646c201988e9fc57525ea5\"" Feb 12 19:21:07.345019 systemd[1]: Started cri-containerd-12b2d62eeb28b84cec9ddcd7e62b626fe58f50a82d646c201988e9fc57525ea5.scope. Feb 12 19:21:07.380318 env[1381]: time="2024-02-12T19:21:07.380270876Z" level=info msg="StartContainer for \"12b2d62eeb28b84cec9ddcd7e62b626fe58f50a82d646c201988e9fc57525ea5\" returns successfully" Feb 12 19:21:07.384590 systemd[1]: cri-containerd-12b2d62eeb28b84cec9ddcd7e62b626fe58f50a82d646c201988e9fc57525ea5.scope: Deactivated successfully. Feb 12 19:21:08.306906 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12b2d62eeb28b84cec9ddcd7e62b626fe58f50a82d646c201988e9fc57525ea5-rootfs.mount: Deactivated successfully. 
Feb 12 19:21:09.144709 env[1381]: time="2024-02-12T19:21:09.144646732Z" level=info msg="shim disconnected" id=12b2d62eeb28b84cec9ddcd7e62b626fe58f50a82d646c201988e9fc57525ea5 Feb 12 19:21:09.144709 env[1381]: time="2024-02-12T19:21:09.144703259Z" level=warning msg="cleaning up after shim disconnected" id=12b2d62eeb28b84cec9ddcd7e62b626fe58f50a82d646c201988e9fc57525ea5 namespace=k8s.io Feb 12 19:21:09.144709 env[1381]: time="2024-02-12T19:21:09.144712540Z" level=info msg="cleaning up dead shim" Feb 12 19:21:09.151431 env[1381]: time="2024-02-12T19:21:09.151386577Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:21:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2874 runtime=io.containerd.runc.v2\n" Feb 12 19:21:10.007460 env[1381]: time="2024-02-12T19:21:10.007339108Z" level=info msg="CreateContainer within sandbox \"59bdebc8fcd0c9915a781b0d7c7beb2d8033e4a63e105fe405293e9391873389\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 19:21:10.042190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1547637709.mount: Deactivated successfully. Feb 12 19:21:10.058057 env[1381]: time="2024-02-12T19:21:10.058004930Z" level=info msg="CreateContainer within sandbox \"59bdebc8fcd0c9915a781b0d7c7beb2d8033e4a63e105fe405293e9391873389\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a8ace69797ef4451b08622387cee459124c4dfac4bfc764640389980d70710f1\"" Feb 12 19:21:10.058640 env[1381]: time="2024-02-12T19:21:10.058564233Z" level=info msg="StartContainer for \"a8ace69797ef4451b08622387cee459124c4dfac4bfc764640389980d70710f1\"" Feb 12 19:21:10.074519 systemd[1]: Started cri-containerd-a8ace69797ef4451b08622387cee459124c4dfac4bfc764640389980d70710f1.scope. Feb 12 19:21:10.109046 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 19:21:10.109287 systemd[1]: Stopped systemd-sysctl.service. Feb 12 19:21:10.109504 systemd[1]: Stopping systemd-sysctl.service... 
Feb 12 19:21:10.109817 env[1381]: time="2024-02-12T19:21:10.109781316Z" level=info msg="StartContainer for \"a8ace69797ef4451b08622387cee459124c4dfac4bfc764640389980d70710f1\" returns successfully" Feb 12 19:21:10.111108 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:21:10.114740 systemd[1]: cri-containerd-a8ace69797ef4451b08622387cee459124c4dfac4bfc764640389980d70710f1.scope: Deactivated successfully. Feb 12 19:21:10.123436 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:21:10.145565 env[1381]: time="2024-02-12T19:21:10.145517790Z" level=info msg="shim disconnected" id=a8ace69797ef4451b08622387cee459124c4dfac4bfc764640389980d70710f1 Feb 12 19:21:10.145565 env[1381]: time="2024-02-12T19:21:10.145564755Z" level=warning msg="cleaning up after shim disconnected" id=a8ace69797ef4451b08622387cee459124c4dfac4bfc764640389980d70710f1 namespace=k8s.io Feb 12 19:21:10.145942 env[1381]: time="2024-02-12T19:21:10.145575076Z" level=info msg="cleaning up dead shim" Feb 12 19:21:10.151358 env[1381]: time="2024-02-12T19:21:10.151314117Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:21:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2941 runtime=io.containerd.runc.v2\n" Feb 12 19:21:11.012437 env[1381]: time="2024-02-12T19:21:11.012345399Z" level=info msg="CreateContainer within sandbox \"59bdebc8fcd0c9915a781b0d7c7beb2d8033e4a63e105fe405293e9391873389\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 19:21:11.035744 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8ace69797ef4451b08622387cee459124c4dfac4bfc764640389980d70710f1-rootfs.mount: Deactivated successfully. Feb 12 19:21:11.049197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1926944316.mount: Deactivated successfully. Feb 12 19:21:11.053315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount575838689.mount: Deactivated successfully. 
Feb 12 19:21:11.080140 env[1381]: time="2024-02-12T19:21:11.080084891Z" level=info msg="CreateContainer within sandbox \"59bdebc8fcd0c9915a781b0d7c7beb2d8033e4a63e105fe405293e9391873389\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"137b809032c9fbdb4fda0278837f5ab23d6f6b179be6c09185e55907567093dc\"" Feb 12 19:21:11.081731 env[1381]: time="2024-02-12T19:21:11.080612869Z" level=info msg="StartContainer for \"137b809032c9fbdb4fda0278837f5ab23d6f6b179be6c09185e55907567093dc\"" Feb 12 19:21:11.097758 systemd[1]: Started cri-containerd-137b809032c9fbdb4fda0278837f5ab23d6f6b179be6c09185e55907567093dc.scope. Feb 12 19:21:11.127898 systemd[1]: cri-containerd-137b809032c9fbdb4fda0278837f5ab23d6f6b179be6c09185e55907567093dc.scope: Deactivated successfully. Feb 12 19:21:11.133091 env[1381]: time="2024-02-12T19:21:11.131840185Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18110bec_347e_47ff_a549_74df337db3e2.slice/cri-containerd-137b809032c9fbdb4fda0278837f5ab23d6f6b179be6c09185e55907567093dc.scope/memory.events\": no such file or directory" Feb 12 19:21:11.139091 env[1381]: time="2024-02-12T19:21:11.139043138Z" level=info msg="StartContainer for \"137b809032c9fbdb4fda0278837f5ab23d6f6b179be6c09185e55907567093dc\" returns successfully" Feb 12 19:21:11.195575 env[1381]: time="2024-02-12T19:21:11.193546054Z" level=info msg="shim disconnected" id=137b809032c9fbdb4fda0278837f5ab23d6f6b179be6c09185e55907567093dc Feb 12 19:21:11.195575 env[1381]: time="2024-02-12T19:21:11.193591419Z" level=warning msg="cleaning up after shim disconnected" id=137b809032c9fbdb4fda0278837f5ab23d6f6b179be6c09185e55907567093dc namespace=k8s.io Feb 12 19:21:11.195575 env[1381]: time="2024-02-12T19:21:11.193601060Z" level=info msg="cleaning up dead shim" Feb 12 19:21:11.200681 env[1381]: time="2024-02-12T19:21:11.200636234Z" level=warning 
msg="cleanup warnings time=\"2024-02-12T19:21:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2998 runtime=io.containerd.runc.v2\n" Feb 12 19:21:11.670439 env[1381]: time="2024-02-12T19:21:11.670387515Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:21:11.679860 env[1381]: time="2024-02-12T19:21:11.679816792Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:21:11.683487 env[1381]: time="2024-02-12T19:21:11.683435670Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:21:11.684087 env[1381]: time="2024-02-12T19:21:11.684054618Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 12 19:21:11.689052 env[1381]: time="2024-02-12T19:21:11.689019845Z" level=info msg="CreateContainer within sandbox \"697da070cce4885e1ed2c652df137f860f10bf3d893c1bea17d372866d7f10e6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 12 19:21:11.718781 env[1381]: time="2024-02-12T19:21:11.718734274Z" level=info msg="CreateContainer within sandbox \"697da070cce4885e1ed2c652df137f860f10bf3d893c1bea17d372866d7f10e6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"56f96c404fee5a08ff4b0a1f9f935fff71a6bc291f4eea5cf92eaba071884d82\"" Feb 12 19:21:11.719617 env[1381]: 
time="2024-02-12T19:21:11.719539002Z" level=info msg="StartContainer for \"56f96c404fee5a08ff4b0a1f9f935fff71a6bc291f4eea5cf92eaba071884d82\"" Feb 12 19:21:11.736886 systemd[1]: Started cri-containerd-56f96c404fee5a08ff4b0a1f9f935fff71a6bc291f4eea5cf92eaba071884d82.scope. Feb 12 19:21:11.767797 env[1381]: time="2024-02-12T19:21:11.767631693Z" level=info msg="StartContainer for \"56f96c404fee5a08ff4b0a1f9f935fff71a6bc291f4eea5cf92eaba071884d82\" returns successfully" Feb 12 19:21:12.014882 env[1381]: time="2024-02-12T19:21:12.014774909Z" level=info msg="CreateContainer within sandbox \"59bdebc8fcd0c9915a781b0d7c7beb2d8033e4a63e105fe405293e9391873389\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 19:21:12.048446 kubelet[2471]: I0212 19:21:12.048403 2471 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-7zvvl" podStartSLOduration=1.51425436 podCreationTimestamp="2024-02-12 19:20:59 +0000 UTC" firstStartedPulling="2024-02-12 19:21:00.15021046 +0000 UTC m=+14.444687364" lastFinishedPulling="2024-02-12 19:21:11.684320408 +0000 UTC m=+25.978797352" observedRunningTime="2024-02-12 19:21:12.021143679 +0000 UTC m=+26.315620623" watchObservedRunningTime="2024-02-12 19:21:12.048364348 +0000 UTC m=+26.342841292" Feb 12 19:21:12.059096 env[1381]: time="2024-02-12T19:21:12.059041865Z" level=info msg="CreateContainer within sandbox \"59bdebc8fcd0c9915a781b0d7c7beb2d8033e4a63e105fe405293e9391873389\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b0bfd5a3fe1beac46543cae2e6442c3e14323218c591f7b8bca23b772289ceb6\"" Feb 12 19:21:12.059755 env[1381]: time="2024-02-12T19:21:12.059730060Z" level=info msg="StartContainer for \"b0bfd5a3fe1beac46543cae2e6442c3e14323218c591f7b8bca23b772289ceb6\"" Feb 12 19:21:12.082725 systemd[1]: Started cri-containerd-b0bfd5a3fe1beac46543cae2e6442c3e14323218c591f7b8bca23b772289ceb6.scope. 
Feb 12 19:21:12.125843 systemd[1]: cri-containerd-b0bfd5a3fe1beac46543cae2e6442c3e14323218c591f7b8bca23b772289ceb6.scope: Deactivated successfully. Feb 12 19:21:12.134026 env[1381]: time="2024-02-12T19:21:12.133977663Z" level=info msg="StartContainer for \"b0bfd5a3fe1beac46543cae2e6442c3e14323218c591f7b8bca23b772289ceb6\" returns successfully" Feb 12 19:21:12.423089 env[1381]: time="2024-02-12T19:21:12.423038140Z" level=info msg="shim disconnected" id=b0bfd5a3fe1beac46543cae2e6442c3e14323218c591f7b8bca23b772289ceb6 Feb 12 19:21:12.423089 env[1381]: time="2024-02-12T19:21:12.423084985Z" level=warning msg="cleaning up after shim disconnected" id=b0bfd5a3fe1beac46543cae2e6442c3e14323218c591f7b8bca23b772289ceb6 namespace=k8s.io Feb 12 19:21:12.423089 env[1381]: time="2024-02-12T19:21:12.423093986Z" level=info msg="cleaning up dead shim" Feb 12 19:21:12.433412 env[1381]: time="2024-02-12T19:21:12.433363459Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:21:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3088 runtime=io.containerd.runc.v2\n" Feb 12 19:21:13.018744 env[1381]: time="2024-02-12T19:21:13.018705488Z" level=info msg="CreateContainer within sandbox \"59bdebc8fcd0c9915a781b0d7c7beb2d8033e4a63e105fe405293e9391873389\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 19:21:13.036022 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0bfd5a3fe1beac46543cae2e6442c3e14323218c591f7b8bca23b772289ceb6-rootfs.mount: Deactivated successfully. 
Feb 12 19:21:13.078155 env[1381]: time="2024-02-12T19:21:13.078103707Z" level=info msg="CreateContainer within sandbox \"59bdebc8fcd0c9915a781b0d7c7beb2d8033e4a63e105fe405293e9391873389\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"907d87c0fc85932856821362ff3e3284a8e4849eec0f8749b3c7deff843f2866\"" Feb 12 19:21:13.078664 env[1381]: time="2024-02-12T19:21:13.078637684Z" level=info msg="StartContainer for \"907d87c0fc85932856821362ff3e3284a8e4849eec0f8749b3c7deff843f2866\"" Feb 12 19:21:13.096237 systemd[1]: Started cri-containerd-907d87c0fc85932856821362ff3e3284a8e4849eec0f8749b3c7deff843f2866.scope. Feb 12 19:21:13.155214 env[1381]: time="2024-02-12T19:21:13.155164530Z" level=info msg="StartContainer for \"907d87c0fc85932856821362ff3e3284a8e4849eec0f8749b3c7deff843f2866\" returns successfully" Feb 12 19:21:13.232544 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Feb 12 19:21:13.318035 kubelet[2471]: I0212 19:21:13.317824 2471 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 19:21:13.346345 kubelet[2471]: I0212 19:21:13.346293 2471 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:21:13.351001 systemd[1]: Created slice kubepods-burstable-podf321fec0_ead5_49c5_bc8f_e8fb3a63933c.slice. Feb 12 19:21:13.362207 kubelet[2471]: I0212 19:21:13.362172 2471 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:21:13.366820 systemd[1]: Created slice kubepods-burstable-pod424959a4_582f_4261_b5c8_af3cfc085d21.slice. 
Feb 12 19:21:13.507328 kubelet[2471]: I0212 19:21:13.507288 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nksfw\" (UniqueName: \"kubernetes.io/projected/424959a4-582f-4261-b5c8-af3cfc085d21-kube-api-access-nksfw\") pod \"coredns-5d78c9869d-rqtn4\" (UID: \"424959a4-582f-4261-b5c8-af3cfc085d21\") " pod="kube-system/coredns-5d78c9869d-rqtn4" Feb 12 19:21:13.507502 kubelet[2471]: I0212 19:21:13.507382 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/424959a4-582f-4261-b5c8-af3cfc085d21-config-volume\") pod \"coredns-5d78c9869d-rqtn4\" (UID: \"424959a4-582f-4261-b5c8-af3cfc085d21\") " pod="kube-system/coredns-5d78c9869d-rqtn4" Feb 12 19:21:13.507502 kubelet[2471]: I0212 19:21:13.507415 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw497\" (UniqueName: \"kubernetes.io/projected/f321fec0-ead5-49c5-bc8f-e8fb3a63933c-kube-api-access-gw497\") pod \"coredns-5d78c9869d-gd6kg\" (UID: \"f321fec0-ead5-49c5-bc8f-e8fb3a63933c\") " pod="kube-system/coredns-5d78c9869d-gd6kg" Feb 12 19:21:13.507563 kubelet[2471]: I0212 19:21:13.507514 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f321fec0-ead5-49c5-bc8f-e8fb3a63933c-config-volume\") pod \"coredns-5d78c9869d-gd6kg\" (UID: \"f321fec0-ead5-49c5-bc8f-e8fb3a63933c\") " pod="kube-system/coredns-5d78c9869d-gd6kg" Feb 12 19:21:13.648494 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Feb 12 19:21:13.654316 env[1381]: time="2024-02-12T19:21:13.654276392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-gd6kg,Uid:f321fec0-ead5-49c5-bc8f-e8fb3a63933c,Namespace:kube-system,Attempt:0,}" Feb 12 19:21:13.671105 env[1381]: time="2024-02-12T19:21:13.671066744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-rqtn4,Uid:424959a4-582f-4261-b5c8-af3cfc085d21,Namespace:kube-system,Attempt:0,}" Feb 12 19:21:14.042408 kubelet[2471]: I0212 19:21:14.042313 2471 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-2tnnh" podStartSLOduration=7.881501998 podCreationTimestamp="2024-02-12 19:20:59 +0000 UTC" firstStartedPulling="2024-02-12 19:21:00.117735798 +0000 UTC m=+14.412212742" lastFinishedPulling="2024-02-12 19:21:07.278507061 +0000 UTC m=+21.572984005" observedRunningTime="2024-02-12 19:21:14.034197532 +0000 UTC m=+28.328674476" watchObservedRunningTime="2024-02-12 19:21:14.042273261 +0000 UTC m=+28.336750205" Feb 12 19:21:15.336883 systemd-networkd[1532]: cilium_host: Link UP Feb 12 19:21:15.338323 systemd-networkd[1532]: cilium_net: Link UP Feb 12 19:21:15.351832 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 12 19:21:15.351915 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 12 19:21:15.354945 systemd-networkd[1532]: cilium_net: Gained carrier Feb 12 19:21:15.355103 systemd-networkd[1532]: cilium_host: Gained carrier Feb 12 19:21:15.368607 systemd-networkd[1532]: cilium_host: Gained IPv6LL Feb 12 19:21:15.480269 systemd-networkd[1532]: cilium_vxlan: Link UP Feb 12 19:21:15.480276 systemd-networkd[1532]: cilium_vxlan: Gained carrier Feb 12 19:21:16.003497 kernel: NET: Registered PF_ALG protocol family Feb 12 19:21:16.170632 systemd-networkd[1532]: cilium_net: Gained IPv6LL Feb 12 19:21:16.703430 systemd-networkd[1532]: lxc_health: Link UP Feb 12 19:21:16.717095 systemd-networkd[1532]: lxc_health: Gained 
carrier Feb 12 19:21:16.717570 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 19:21:17.251816 systemd-networkd[1532]: lxc048664868582: Link UP Feb 12 19:21:17.263561 kernel: eth0: renamed from tmp09bda Feb 12 19:21:17.274777 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc048664868582: link becomes ready Feb 12 19:21:17.274175 systemd-networkd[1532]: lxc048664868582: Gained carrier Feb 12 19:21:17.283761 systemd-networkd[1532]: lxc99828c570dc2: Link UP Feb 12 19:21:17.293495 kernel: eth0: renamed from tmp4e885 Feb 12 19:21:17.305596 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc99828c570dc2: link becomes ready Feb 12 19:21:17.305070 systemd-networkd[1532]: lxc99828c570dc2: Gained carrier Feb 12 19:21:17.450670 systemd-networkd[1532]: cilium_vxlan: Gained IPv6LL Feb 12 19:21:18.731675 systemd-networkd[1532]: lxc_health: Gained IPv6LL Feb 12 19:21:18.794664 systemd-networkd[1532]: lxc99828c570dc2: Gained IPv6LL Feb 12 19:21:19.114621 systemd-networkd[1532]: lxc048664868582: Gained IPv6LL Feb 12 19:21:20.823492 env[1381]: time="2024-02-12T19:21:20.818242238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:21:20.823492 env[1381]: time="2024-02-12T19:21:20.818320845Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:21:20.823492 env[1381]: time="2024-02-12T19:21:20.818331847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:21:20.823492 env[1381]: time="2024-02-12T19:21:20.818518585Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/09bdabd988032663c5ead9119f46449a282fe6b142d3e5da8d09671b9cf32e7c pid=3642 runtime=io.containerd.runc.v2 Feb 12 19:21:20.840811 env[1381]: time="2024-02-12T19:21:20.840740413Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:21:20.840961 env[1381]: time="2024-02-12T19:21:20.840781457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:21:20.840961 env[1381]: time="2024-02-12T19:21:20.840799259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:21:20.840961 env[1381]: time="2024-02-12T19:21:20.840929671Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4e8854367aeb22f504e349f6351f359c8826b918d44df13664ef4a82440dea32 pid=3658 runtime=io.containerd.runc.v2 Feb 12 19:21:20.856207 systemd[1]: run-containerd-runc-k8s.io-09bdabd988032663c5ead9119f46449a282fe6b142d3e5da8d09671b9cf32e7c-runc.78D6sZ.mount: Deactivated successfully. Feb 12 19:21:20.860433 systemd[1]: Started cri-containerd-09bdabd988032663c5ead9119f46449a282fe6b142d3e5da8d09671b9cf32e7c.scope. Feb 12 19:21:20.872958 systemd[1]: Started cri-containerd-4e8854367aeb22f504e349f6351f359c8826b918d44df13664ef4a82440dea32.scope. 
Feb 12 19:21:20.909749 env[1381]: time="2024-02-12T19:21:20.909699799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-gd6kg,Uid:f321fec0-ead5-49c5-bc8f-e8fb3a63933c,Namespace:kube-system,Attempt:0,} returns sandbox id \"09bdabd988032663c5ead9119f46449a282fe6b142d3e5da8d09671b9cf32e7c\"" Feb 12 19:21:20.912998 env[1381]: time="2024-02-12T19:21:20.912960315Z" level=info msg="CreateContainer within sandbox \"09bdabd988032663c5ead9119f46449a282fe6b142d3e5da8d09671b9cf32e7c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 19:21:20.919344 env[1381]: time="2024-02-12T19:21:20.919300568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-rqtn4,Uid:424959a4-582f-4261-b5c8-af3cfc085d21,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e8854367aeb22f504e349f6351f359c8826b918d44df13664ef4a82440dea32\"" Feb 12 19:21:20.925399 env[1381]: time="2024-02-12T19:21:20.925359553Z" level=info msg="CreateContainer within sandbox \"4e8854367aeb22f504e349f6351f359c8826b918d44df13664ef4a82440dea32\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 19:21:20.961537 env[1381]: time="2024-02-12T19:21:20.961485686Z" level=info msg="CreateContainer within sandbox \"09bdabd988032663c5ead9119f46449a282fe6b142d3e5da8d09671b9cf32e7c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4a6e1303c228e0159d9180ce6c98715417e6b81b9d272b7940fde7deeb47bb6f\"" Feb 12 19:21:20.962272 env[1381]: time="2024-02-12T19:21:20.962243599Z" level=info msg="StartContainer for \"4a6e1303c228e0159d9180ce6c98715417e6b81b9d272b7940fde7deeb47bb6f\"" Feb 12 19:21:20.971621 env[1381]: time="2024-02-12T19:21:20.971568180Z" level=info msg="CreateContainer within sandbox \"4e8854367aeb22f504e349f6351f359c8826b918d44df13664ef4a82440dea32\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"78a569cbfe8bfe5efbecd39a7269844a60bc5978abd497273a1a19511628a5e0\"" Feb 12 19:21:20.972502 env[1381]: 
time="2024-02-12T19:21:20.972446105Z" level=info msg="StartContainer for \"78a569cbfe8bfe5efbecd39a7269844a60bc5978abd497273a1a19511628a5e0\"" Feb 12 19:21:20.989595 systemd[1]: Started cri-containerd-78a569cbfe8bfe5efbecd39a7269844a60bc5978abd497273a1a19511628a5e0.scope. Feb 12 19:21:21.001279 systemd[1]: Started cri-containerd-4a6e1303c228e0159d9180ce6c98715417e6b81b9d272b7940fde7deeb47bb6f.scope. Feb 12 19:21:21.039792 env[1381]: time="2024-02-12T19:21:21.039752324Z" level=info msg="StartContainer for \"4a6e1303c228e0159d9180ce6c98715417e6b81b9d272b7940fde7deeb47bb6f\" returns successfully" Feb 12 19:21:21.085637 env[1381]: time="2024-02-12T19:21:21.085491528Z" level=info msg="StartContainer for \"78a569cbfe8bfe5efbecd39a7269844a60bc5978abd497273a1a19511628a5e0\" returns successfully" Feb 12 19:21:22.063120 kubelet[2471]: I0212 19:21:22.063085 2471 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-gd6kg" podStartSLOduration=23.063044968 podCreationTimestamp="2024-02-12 19:20:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:21:21.071164281 +0000 UTC m=+35.365641185" watchObservedRunningTime="2024-02-12 19:21:22.063044968 +0000 UTC m=+36.357521912" Feb 12 19:21:22.063584 kubelet[2471]: I0212 19:21:22.063555 2471 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-rqtn4" podStartSLOduration=23.063532053 podCreationTimestamp="2024-02-12 19:20:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:21:22.058661755 +0000 UTC m=+36.353138699" watchObservedRunningTime="2024-02-12 19:21:22.063532053 +0000 UTC m=+36.358008997" Feb 12 19:21:23.958873 kubelet[2471]: I0212 19:21:23.958840 2471 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 12 
19:23:11.877183 systemd[1]: Started sshd@5-10.200.20.4:22-10.200.12.6:38432.service. Feb 12 19:23:12.297110 sshd[3818]: Accepted publickey for core from 10.200.12.6 port 38432 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:23:12.298423 sshd[3818]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:23:12.302670 systemd[1]: Started session-8.scope. Feb 12 19:23:12.302818 systemd-logind[1370]: New session 8 of user core. Feb 12 19:23:12.765954 sshd[3818]: pam_unix(sshd:session): session closed for user core Feb 12 19:23:12.768539 systemd-logind[1370]: Session 8 logged out. Waiting for processes to exit. Feb 12 19:23:12.768699 systemd[1]: session-8.scope: Deactivated successfully. Feb 12 19:23:12.769486 systemd[1]: sshd@5-10.200.20.4:22-10.200.12.6:38432.service: Deactivated successfully. Feb 12 19:23:12.770706 systemd-logind[1370]: Removed session 8. Feb 12 19:23:17.834908 systemd[1]: Started sshd@6-10.200.20.4:22-10.200.12.6:54858.service. Feb 12 19:23:18.252814 sshd[3830]: Accepted publickey for core from 10.200.12.6 port 54858 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:23:18.254259 sshd[3830]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:23:18.258734 systemd[1]: Started session-9.scope. Feb 12 19:23:18.259564 systemd-logind[1370]: New session 9 of user core. Feb 12 19:23:18.611302 sshd[3830]: pam_unix(sshd:session): session closed for user core Feb 12 19:23:18.614184 systemd[1]: sshd@6-10.200.20.4:22-10.200.12.6:54858.service: Deactivated successfully. Feb 12 19:23:18.614961 systemd[1]: session-9.scope: Deactivated successfully. Feb 12 19:23:18.615522 systemd-logind[1370]: Session 9 logged out. Waiting for processes to exit. Feb 12 19:23:18.616232 systemd-logind[1370]: Removed session 9. Feb 12 19:23:23.681638 systemd[1]: Started sshd@7-10.200.20.4:22-10.200.12.6:54872.service. 
Feb 12 19:23:24.093532 sshd[3843]: Accepted publickey for core from 10.200.12.6 port 54872 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:23:24.095112 sshd[3843]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:23:24.099280 systemd[1]: Started session-10.scope. Feb 12 19:23:24.099792 systemd-logind[1370]: New session 10 of user core. Feb 12 19:23:24.468738 sshd[3843]: pam_unix(sshd:session): session closed for user core Feb 12 19:23:24.471247 systemd-logind[1370]: Session 10 logged out. Waiting for processes to exit. Feb 12 19:23:24.471424 systemd[1]: sshd@7-10.200.20.4:22-10.200.12.6:54872.service: Deactivated successfully. Feb 12 19:23:24.472159 systemd[1]: session-10.scope: Deactivated successfully. Feb 12 19:23:24.472999 systemd-logind[1370]: Removed session 10. Feb 12 19:23:29.538176 systemd[1]: Started sshd@8-10.200.20.4:22-10.200.12.6:49576.service. Feb 12 19:23:29.952565 sshd[3856]: Accepted publickey for core from 10.200.12.6 port 49576 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:23:29.953905 sshd[3856]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:23:29.958320 systemd[1]: Started session-11.scope. Feb 12 19:23:29.959314 systemd-logind[1370]: New session 11 of user core. Feb 12 19:23:30.311947 sshd[3856]: pam_unix(sshd:session): session closed for user core Feb 12 19:23:30.314873 systemd[1]: sshd@8-10.200.20.4:22-10.200.12.6:49576.service: Deactivated successfully. Feb 12 19:23:30.315695 systemd[1]: session-11.scope: Deactivated successfully. Feb 12 19:23:30.316294 systemd-logind[1370]: Session 11 logged out. Waiting for processes to exit. Feb 12 19:23:30.317169 systemd-logind[1370]: Removed session 11. Feb 12 19:23:30.382563 systemd[1]: Started sshd@9-10.200.20.4:22-10.200.12.6:49588.service. 
Feb 12 19:23:30.803575 sshd[3871]: Accepted publickey for core from 10.200.12.6 port 49588 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:23:30.805092 sshd[3871]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:23:30.809290 systemd[1]: Started session-12.scope. Feb 12 19:23:30.810240 systemd-logind[1370]: New session 12 of user core. Feb 12 19:23:31.756858 sshd[3871]: pam_unix(sshd:session): session closed for user core Feb 12 19:23:31.759593 systemd[1]: sshd@9-10.200.20.4:22-10.200.12.6:49588.service: Deactivated successfully. Feb 12 19:23:31.760686 systemd[1]: session-12.scope: Deactivated successfully. Feb 12 19:23:31.761582 systemd-logind[1370]: Session 12 logged out. Waiting for processes to exit. Feb 12 19:23:31.762516 systemd-logind[1370]: Removed session 12. Feb 12 19:23:31.827194 systemd[1]: Started sshd@10-10.200.20.4:22-10.200.12.6:49592.service. Feb 12 19:23:32.242419 sshd[3880]: Accepted publickey for core from 10.200.12.6 port 49592 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:23:32.243985 sshd[3880]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:23:32.248262 systemd[1]: Started session-13.scope. Feb 12 19:23:32.249134 systemd-logind[1370]: New session 13 of user core. Feb 12 19:23:32.601177 sshd[3880]: pam_unix(sshd:session): session closed for user core Feb 12 19:23:32.603594 systemd[1]: sshd@10-10.200.20.4:22-10.200.12.6:49592.service: Deactivated successfully. Feb 12 19:23:32.604329 systemd[1]: session-13.scope: Deactivated successfully. Feb 12 19:23:32.604922 systemd-logind[1370]: Session 13 logged out. Waiting for processes to exit. Feb 12 19:23:32.605613 systemd-logind[1370]: Removed session 13. Feb 12 19:23:37.676041 systemd[1]: Started sshd@11-10.200.20.4:22-10.200.12.6:40512.service. 
Feb 12 19:23:38.121436 sshd[3892]: Accepted publickey for core from 10.200.12.6 port 40512 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:23:38.123043 sshd[3892]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:23:38.127301 systemd[1]: Started session-14.scope. Feb 12 19:23:38.127449 systemd-logind[1370]: New session 14 of user core. Feb 12 19:23:38.514230 sshd[3892]: pam_unix(sshd:session): session closed for user core Feb 12 19:23:38.517192 systemd-logind[1370]: Session 14 logged out. Waiting for processes to exit. Feb 12 19:23:38.517428 systemd[1]: sshd@11-10.200.20.4:22-10.200.12.6:40512.service: Deactivated successfully. Feb 12 19:23:38.518160 systemd[1]: session-14.scope: Deactivated successfully. Feb 12 19:23:38.518921 systemd-logind[1370]: Removed session 14. Feb 12 19:23:38.584547 systemd[1]: Started sshd@12-10.200.20.4:22-10.200.12.6:40520.service. Feb 12 19:23:39.000628 sshd[3903]: Accepted publickey for core from 10.200.12.6 port 40520 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:23:39.001911 sshd[3903]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:23:39.005550 systemd-logind[1370]: New session 15 of user core. Feb 12 19:23:39.006299 systemd[1]: Started session-15.scope. Feb 12 19:23:39.393942 sshd[3903]: pam_unix(sshd:session): session closed for user core Feb 12 19:23:39.396525 systemd-logind[1370]: Session 15 logged out. Waiting for processes to exit. Feb 12 19:23:39.396614 systemd[1]: session-15.scope: Deactivated successfully. Feb 12 19:23:39.397265 systemd[1]: sshd@12-10.200.20.4:22-10.200.12.6:40520.service: Deactivated successfully. Feb 12 19:23:39.398336 systemd-logind[1370]: Removed session 15. Feb 12 19:23:39.463496 systemd[1]: Started sshd@13-10.200.20.4:22-10.200.12.6:40524.service. 
Feb 12 19:23:39.876184 sshd[3912]: Accepted publickey for core from 10.200.12.6 port 40524 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U
Feb 12 19:23:39.877573 sshd[3912]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:23:39.881775 systemd[1]: Started session-16.scope.
Feb 12 19:23:39.882228 systemd-logind[1370]: New session 16 of user core.
Feb 12 19:23:41.040413 sshd[3912]: pam_unix(sshd:session): session closed for user core
Feb 12 19:23:41.043588 systemd[1]: sshd@13-10.200.20.4:22-10.200.12.6:40524.service: Deactivated successfully.
Feb 12 19:23:41.044325 systemd[1]: session-16.scope: Deactivated successfully.
Feb 12 19:23:41.045345 systemd-logind[1370]: Session 16 logged out. Waiting for processes to exit.
Feb 12 19:23:41.046097 systemd-logind[1370]: Removed session 16.
Feb 12 19:23:41.110051 systemd[1]: Started sshd@14-10.200.20.4:22-10.200.12.6:40540.service.
Feb 12 19:23:41.522225 sshd[3929]: Accepted publickey for core from 10.200.12.6 port 40540 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U
Feb 12 19:23:41.523982 sshd[3929]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:23:41.528252 systemd[1]: Started session-17.scope.
Feb 12 19:23:41.529526 systemd-logind[1370]: New session 17 of user core.
Feb 12 19:23:42.065956 sshd[3929]: pam_unix(sshd:session): session closed for user core
Feb 12 19:23:42.068561 systemd[1]: sshd@14-10.200.20.4:22-10.200.12.6:40540.service: Deactivated successfully.
Feb 12 19:23:42.069772 systemd[1]: session-17.scope: Deactivated successfully.
Feb 12 19:23:42.070679 systemd-logind[1370]: Session 17 logged out. Waiting for processes to exit.
Feb 12 19:23:42.071629 systemd-logind[1370]: Removed session 17.
Feb 12 19:23:42.135904 systemd[1]: Started sshd@15-10.200.20.4:22-10.200.12.6:40552.service.
Feb 12 19:23:42.551296 sshd[3939]: Accepted publickey for core from 10.200.12.6 port 40552 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U
Feb 12 19:23:42.553195 sshd[3939]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:23:42.557328 systemd[1]: Started session-18.scope.
Feb 12 19:23:42.558573 systemd-logind[1370]: New session 18 of user core.
Feb 12 19:23:42.916329 sshd[3939]: pam_unix(sshd:session): session closed for user core
Feb 12 19:23:42.918880 systemd-logind[1370]: Session 18 logged out. Waiting for processes to exit.
Feb 12 19:23:42.919062 systemd[1]: sshd@15-10.200.20.4:22-10.200.12.6:40552.service: Deactivated successfully.
Feb 12 19:23:42.919795 systemd[1]: session-18.scope: Deactivated successfully.
Feb 12 19:23:42.920707 systemd-logind[1370]: Removed session 18.
Feb 12 19:23:47.986500 systemd[1]: Started sshd@16-10.200.20.4:22-10.200.12.6:36096.service.
Feb 12 19:23:48.402266 sshd[3957]: Accepted publickey for core from 10.200.12.6 port 36096 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U
Feb 12 19:23:48.403638 sshd[3957]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:23:48.408663 systemd[1]: Started session-19.scope.
Feb 12 19:23:48.409580 systemd-logind[1370]: New session 19 of user core.
Feb 12 19:23:48.773994 sshd[3957]: pam_unix(sshd:session): session closed for user core
Feb 12 19:23:48.776664 systemd-logind[1370]: Session 19 logged out. Waiting for processes to exit.
Feb 12 19:23:48.777886 systemd[1]: session-19.scope: Deactivated successfully.
Feb 12 19:23:48.778926 systemd[1]: sshd@16-10.200.20.4:22-10.200.12.6:36096.service: Deactivated successfully.
Feb 12 19:23:48.779948 systemd-logind[1370]: Removed session 19.
Feb 12 19:23:53.845188 systemd[1]: Started sshd@17-10.200.20.4:22-10.200.12.6:36106.service.
Feb 12 19:23:54.266236 sshd[3971]: Accepted publickey for core from 10.200.12.6 port 36106 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U
Feb 12 19:23:54.267839 sshd[3971]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:23:54.272184 systemd[1]: Started session-20.scope.
Feb 12 19:23:54.272698 systemd-logind[1370]: New session 20 of user core.
Feb 12 19:23:54.628350 sshd[3971]: pam_unix(sshd:session): session closed for user core
Feb 12 19:23:54.630799 systemd-logind[1370]: Session 20 logged out. Waiting for processes to exit.
Feb 12 19:23:54.630959 systemd[1]: sshd@17-10.200.20.4:22-10.200.12.6:36106.service: Deactivated successfully.
Feb 12 19:23:54.631769 systemd[1]: session-20.scope: Deactivated successfully.
Feb 12 19:23:54.632527 systemd-logind[1370]: Removed session 20.
Feb 12 19:23:59.703808 systemd[1]: Started sshd@18-10.200.20.4:22-10.200.12.6:59912.service.
Feb 12 19:24:00.151024 sshd[3983]: Accepted publickey for core from 10.200.12.6 port 59912 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U
Feb 12 19:24:00.152339 sshd[3983]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:24:00.156099 systemd-logind[1370]: New session 21 of user core.
Feb 12 19:24:00.156566 systemd[1]: Started session-21.scope.
Feb 12 19:24:00.535266 sshd[3983]: pam_unix(sshd:session): session closed for user core
Feb 12 19:24:00.537907 systemd-logind[1370]: Session 21 logged out. Waiting for processes to exit.
Feb 12 19:24:00.538094 systemd[1]: sshd@18-10.200.20.4:22-10.200.12.6:59912.service: Deactivated successfully.
Feb 12 19:24:00.538831 systemd[1]: session-21.scope: Deactivated successfully.
Feb 12 19:24:00.539708 systemd-logind[1370]: Removed session 21.
Feb 12 19:24:00.618215 systemd[1]: Started sshd@19-10.200.20.4:22-10.200.12.6:59926.service.
Feb 12 19:24:01.064759 sshd[3997]: Accepted publickey for core from 10.200.12.6 port 59926 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U
Feb 12 19:24:01.066364 sshd[3997]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:24:01.070758 systemd[1]: Started session-22.scope.
Feb 12 19:24:01.071460 systemd-logind[1370]: New session 22 of user core.
Feb 12 19:24:03.966197 systemd[1]: run-containerd-runc-k8s.io-907d87c0fc85932856821362ff3e3284a8e4849eec0f8749b3c7deff843f2866-runc.CgEKC2.mount: Deactivated successfully.
Feb 12 19:24:03.978755 env[1381]: time="2024-02-12T19:24:03.978717376Z" level=info msg="StopContainer for \"56f96c404fee5a08ff4b0a1f9f935fff71a6bc291f4eea5cf92eaba071884d82\" with timeout 30 (s)"
Feb 12 19:24:03.980003 env[1381]: time="2024-02-12T19:24:03.979974096Z" level=info msg="Stop container \"56f96c404fee5a08ff4b0a1f9f935fff71a6bc291f4eea5cf92eaba071884d82\" with signal terminated"
Feb 12 19:24:03.997856 env[1381]: time="2024-02-12T19:24:03.997802638Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 12 19:24:04.003031 env[1381]: time="2024-02-12T19:24:04.002998389Z" level=info msg="StopContainer for \"907d87c0fc85932856821362ff3e3284a8e4849eec0f8749b3c7deff843f2866\" with timeout 1 (s)"
Feb 12 19:24:04.003401 env[1381]: time="2024-02-12T19:24:04.003372005Z" level=info msg="Stop container \"907d87c0fc85932856821362ff3e3284a8e4849eec0f8749b3c7deff843f2866\" with signal terminated"
Feb 12 19:24:04.015061 systemd-networkd[1532]: lxc_health: Link DOWN
Feb 12 19:24:04.015067 systemd-networkd[1532]: lxc_health: Lost carrier
Feb 12 19:24:04.037025 systemd[1]: cri-containerd-56f96c404fee5a08ff4b0a1f9f935fff71a6bc291f4eea5cf92eaba071884d82.scope: Deactivated successfully.
Feb 12 19:24:04.060110 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56f96c404fee5a08ff4b0a1f9f935fff71a6bc291f4eea5cf92eaba071884d82-rootfs.mount: Deactivated successfully.
Feb 12 19:24:04.060674 systemd[1]: cri-containerd-907d87c0fc85932856821362ff3e3284a8e4849eec0f8749b3c7deff843f2866.scope: Deactivated successfully.
Feb 12 19:24:04.060956 systemd[1]: cri-containerd-907d87c0fc85932856821362ff3e3284a8e4849eec0f8749b3c7deff843f2866.scope: Consumed 6.343s CPU time.
Feb 12 19:24:04.080963 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-907d87c0fc85932856821362ff3e3284a8e4849eec0f8749b3c7deff843f2866-rootfs.mount: Deactivated successfully.
Feb 12 19:24:04.097956 env[1381]: time="2024-02-12T19:24:04.097913111Z" level=info msg="shim disconnected" id=56f96c404fee5a08ff4b0a1f9f935fff71a6bc291f4eea5cf92eaba071884d82
Feb 12 19:24:04.098222 env[1381]: time="2024-02-12T19:24:04.098204293Z" level=warning msg="cleaning up after shim disconnected" id=56f96c404fee5a08ff4b0a1f9f935fff71a6bc291f4eea5cf92eaba071884d82 namespace=k8s.io
Feb 12 19:24:04.098289 env[1381]: time="2024-02-12T19:24:04.098276888Z" level=info msg="cleaning up dead shim"
Feb 12 19:24:04.098951 env[1381]: time="2024-02-12T19:24:04.098903849Z" level=info msg="shim disconnected" id=907d87c0fc85932856821362ff3e3284a8e4849eec0f8749b3c7deff843f2866
Feb 12 19:24:04.099020 env[1381]: time="2024-02-12T19:24:04.098951766Z" level=warning msg="cleaning up after shim disconnected" id=907d87c0fc85932856821362ff3e3284a8e4849eec0f8749b3c7deff843f2866 namespace=k8s.io
Feb 12 19:24:04.099020 env[1381]: time="2024-02-12T19:24:04.098960766Z" level=info msg="cleaning up dead shim"
Feb 12 19:24:04.108949 env[1381]: time="2024-02-12T19:24:04.108910461Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4069 runtime=io.containerd.runc.v2\n"
Feb 12 19:24:04.110159 env[1381]: time="2024-02-12T19:24:04.110122985Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4070 runtime=io.containerd.runc.v2\n"
Feb 12 19:24:04.114155 env[1381]: time="2024-02-12T19:24:04.114123414Z" level=info msg="StopContainer for \"56f96c404fee5a08ff4b0a1f9f935fff71a6bc291f4eea5cf92eaba071884d82\" returns successfully"
Feb 12 19:24:04.114924 env[1381]: time="2024-02-12T19:24:04.114897965Z" level=info msg="StopPodSandbox for \"697da070cce4885e1ed2c652df137f860f10bf3d893c1bea17d372866d7f10e6\""
Feb 12 19:24:04.115084 env[1381]: time="2024-02-12T19:24:04.115063395Z" level=info msg="Container to stop \"56f96c404fee5a08ff4b0a1f9f935fff71a6bc291f4eea5cf92eaba071884d82\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:24:04.116274 env[1381]: time="2024-02-12T19:24:04.116229722Z" level=info msg="StopContainer for \"907d87c0fc85932856821362ff3e3284a8e4849eec0f8749b3c7deff843f2866\" returns successfully"
Feb 12 19:24:04.116741 env[1381]: time="2024-02-12T19:24:04.116658055Z" level=info msg="StopPodSandbox for \"59bdebc8fcd0c9915a781b0d7c7beb2d8033e4a63e105fe405293e9391873389\""
Feb 12 19:24:04.116872 env[1381]: time="2024-02-12T19:24:04.116852723Z" level=info msg="Container to stop \"12b2d62eeb28b84cec9ddcd7e62b626fe58f50a82d646c201988e9fc57525ea5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:24:04.116938 env[1381]: time="2024-02-12T19:24:04.116922398Z" level=info msg="Container to stop \"a8ace69797ef4451b08622387cee459124c4dfac4bfc764640389980d70710f1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:24:04.116995 env[1381]: time="2024-02-12T19:24:04.116980355Z" level=info msg="Container to stop \"907d87c0fc85932856821362ff3e3284a8e4849eec0f8749b3c7deff843f2866\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:24:04.117056 env[1381]: time="2024-02-12T19:24:04.117040231Z" level=info msg="Container to stop \"137b809032c9fbdb4fda0278837f5ab23d6f6b179be6c09185e55907567093dc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:24:04.117117 env[1381]: time="2024-02-12T19:24:04.117100987Z" level=info msg="Container to stop \"b0bfd5a3fe1beac46543cae2e6442c3e14323218c591f7b8bca23b772289ceb6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:24:04.121509 systemd[1]: cri-containerd-59bdebc8fcd0c9915a781b0d7c7beb2d8033e4a63e105fe405293e9391873389.scope: Deactivated successfully.
Feb 12 19:24:04.122862 systemd[1]: cri-containerd-697da070cce4885e1ed2c652df137f860f10bf3d893c1bea17d372866d7f10e6.scope: Deactivated successfully.
Feb 12 19:24:04.157046 env[1381]: time="2024-02-12T19:24:04.157000323Z" level=info msg="shim disconnected" id=59bdebc8fcd0c9915a781b0d7c7beb2d8033e4a63e105fe405293e9391873389
Feb 12 19:24:04.157260 env[1381]: time="2024-02-12T19:24:04.157241868Z" level=warning msg="cleaning up after shim disconnected" id=59bdebc8fcd0c9915a781b0d7c7beb2d8033e4a63e105fe405293e9391873389 namespace=k8s.io
Feb 12 19:24:04.157320 env[1381]: time="2024-02-12T19:24:04.157306903Z" level=info msg="cleaning up dead shim"
Feb 12 19:24:04.157827 env[1381]: time="2024-02-12T19:24:04.157345061Z" level=info msg="shim disconnected" id=697da070cce4885e1ed2c652df137f860f10bf3d893c1bea17d372866d7f10e6
Feb 12 19:24:04.157902 env[1381]: time="2024-02-12T19:24:04.157823591Z" level=warning msg="cleaning up after shim disconnected" id=697da070cce4885e1ed2c652df137f860f10bf3d893c1bea17d372866d7f10e6 namespace=k8s.io
Feb 12 19:24:04.157902 env[1381]: time="2024-02-12T19:24:04.157840670Z" level=info msg="cleaning up dead shim"
Feb 12 19:24:04.166611 env[1381]: time="2024-02-12T19:24:04.166574842Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4130 runtime=io.containerd.runc.v2\n"
Feb 12 19:24:04.166756 env[1381]: time="2024-02-12T19:24:04.166577042Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4131 runtime=io.containerd.runc.v2\n"
Feb 12 19:24:04.167049 env[1381]: time="2024-02-12T19:24:04.167009334Z" level=info msg="TearDown network for sandbox \"697da070cce4885e1ed2c652df137f860f10bf3d893c1bea17d372866d7f10e6\" successfully"
Feb 12 19:24:04.167049 env[1381]: time="2024-02-12T19:24:04.167041772Z" level=info msg="StopPodSandbox for \"697da070cce4885e1ed2c652df137f860f10bf3d893c1bea17d372866d7f10e6\" returns successfully"
Feb 12 19:24:04.167149 env[1381]: time="2024-02-12T19:24:04.167030693Z" level=info msg="TearDown network for sandbox \"59bdebc8fcd0c9915a781b0d7c7beb2d8033e4a63e105fe405293e9391873389\" successfully"
Feb 12 19:24:04.167149 env[1381]: time="2024-02-12T19:24:04.167143686Z" level=info msg="StopPodSandbox for \"59bdebc8fcd0c9915a781b0d7c7beb2d8033e4a63e105fe405293e9391873389\" returns successfully"
Feb 12 19:24:04.333484 kubelet[2471]: I0212 19:24:04.333441 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/18110bec-347e-47ff-a549-74df337db3e2-cilium-config-path\") pod \"18110bec-347e-47ff-a549-74df337db3e2\" (UID: \"18110bec-347e-47ff-a549-74df337db3e2\") "
Feb 12 19:24:04.333891 kubelet[2471]: I0212 19:24:04.333876 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq7d9\" (UniqueName: \"kubernetes.io/projected/18110bec-347e-47ff-a549-74df337db3e2-kube-api-access-zq7d9\") pod \"18110bec-347e-47ff-a549-74df337db3e2\" (UID: \"18110bec-347e-47ff-a549-74df337db3e2\") "
Feb 12 19:24:04.333980 kubelet[2471]: I0212 19:24:04.333971 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/18110bec-347e-47ff-a549-74df337db3e2-clustermesh-secrets\") pod \"18110bec-347e-47ff-a549-74df337db3e2\" (UID: \"18110bec-347e-47ff-a549-74df337db3e2\") "
Feb 12 19:24:04.334052 kubelet[2471]: I0212 19:24:04.334041 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-cilium-run\") pod \"18110bec-347e-47ff-a549-74df337db3e2\" (UID: \"18110bec-347e-47ff-a549-74df337db3e2\") "
Feb 12 19:24:04.334127 kubelet[2471]: I0212 19:24:04.334118 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-lib-modules\") pod \"18110bec-347e-47ff-a549-74df337db3e2\" (UID: \"18110bec-347e-47ff-a549-74df337db3e2\") "
Feb 12 19:24:04.334197 kubelet[2471]: I0212 19:24:04.334188 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-etc-cni-netd\") pod \"18110bec-347e-47ff-a549-74df337db3e2\" (UID: \"18110bec-347e-47ff-a549-74df337db3e2\") "
Feb 12 19:24:04.334269 kubelet[2471]: I0212 19:24:04.334261 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-host-proc-sys-kernel\") pod \"18110bec-347e-47ff-a549-74df337db3e2\" (UID: \"18110bec-347e-47ff-a549-74df337db3e2\") "
Feb 12 19:24:04.334342 kubelet[2471]: I0212 19:24:04.334333 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-bpf-maps\") pod \"18110bec-347e-47ff-a549-74df337db3e2\" (UID: \"18110bec-347e-47ff-a549-74df337db3e2\") "
Feb 12 19:24:04.334415 kubelet[2471]: I0212 19:24:04.334405 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-xtables-lock\") pod \"18110bec-347e-47ff-a549-74df337db3e2\" (UID: \"18110bec-347e-47ff-a549-74df337db3e2\") "
Feb 12 19:24:04.334520 kubelet[2471]: I0212 19:24:04.334508 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-cilium-cgroup\") pod \"18110bec-347e-47ff-a549-74df337db3e2\" (UID: \"18110bec-347e-47ff-a549-74df337db3e2\") "
Feb 12 19:24:04.334601 kubelet[2471]: I0212 19:24:04.334591 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-hostproc\") pod \"18110bec-347e-47ff-a549-74df337db3e2\" (UID: \"18110bec-347e-47ff-a549-74df337db3e2\") "
Feb 12 19:24:04.334668 kubelet[2471]: I0212 19:24:04.334659 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-host-proc-sys-net\") pod \"18110bec-347e-47ff-a549-74df337db3e2\" (UID: \"18110bec-347e-47ff-a549-74df337db3e2\") "
Feb 12 19:24:04.334747 kubelet[2471]: I0212 19:24:04.334738 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmv7t\" (UniqueName: \"kubernetes.io/projected/c9a05f5e-b6d7-46b5-93b3-d6b8c1892565-kube-api-access-rmv7t\") pod \"c9a05f5e-b6d7-46b5-93b3-d6b8c1892565\" (UID: \"c9a05f5e-b6d7-46b5-93b3-d6b8c1892565\") "
Feb 12 19:24:04.334821 kubelet[2471]: I0212 19:24:04.334812 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c9a05f5e-b6d7-46b5-93b3-d6b8c1892565-cilium-config-path\") pod \"c9a05f5e-b6d7-46b5-93b3-d6b8c1892565\" (UID: \"c9a05f5e-b6d7-46b5-93b3-d6b8c1892565\") "
Feb 12 19:24:04.334895 kubelet[2471]: I0212 19:24:04.334886 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-cni-path\") pod \"18110bec-347e-47ff-a549-74df337db3e2\" (UID: \"18110bec-347e-47ff-a549-74df337db3e2\") "
Feb 12 19:24:04.334970 kubelet[2471]: I0212 19:24:04.334961 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/18110bec-347e-47ff-a549-74df337db3e2-hubble-tls\") pod \"18110bec-347e-47ff-a549-74df337db3e2\" (UID: \"18110bec-347e-47ff-a549-74df337db3e2\") "
Feb 12 19:24:04.336414 kubelet[2471]: I0212 19:24:04.336375 2471 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18110bec-347e-47ff-a549-74df337db3e2-kube-api-access-zq7d9" (OuterVolumeSpecName: "kube-api-access-zq7d9") pod "18110bec-347e-47ff-a549-74df337db3e2" (UID: "18110bec-347e-47ff-a549-74df337db3e2"). InnerVolumeSpecName "kube-api-access-zq7d9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 19:24:04.336530 kubelet[2471]: I0212 19:24:04.336438 2471 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "18110bec-347e-47ff-a549-74df337db3e2" (UID: "18110bec-347e-47ff-a549-74df337db3e2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:24:04.336530 kubelet[2471]: W0212 19:24:04.333791 2471 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/18110bec-347e-47ff-a549-74df337db3e2/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 12 19:24:04.337791 kubelet[2471]: I0212 19:24:04.337757 2471 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18110bec-347e-47ff-a549-74df337db3e2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "18110bec-347e-47ff-a549-74df337db3e2" (UID: "18110bec-347e-47ff-a549-74df337db3e2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 19:24:04.337921 kubelet[2471]: I0212 19:24:04.337907 2471 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "18110bec-347e-47ff-a549-74df337db3e2" (UID: "18110bec-347e-47ff-a549-74df337db3e2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:24:04.338029 kubelet[2471]: I0212 19:24:04.338004 2471 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "18110bec-347e-47ff-a549-74df337db3e2" (UID: "18110bec-347e-47ff-a549-74df337db3e2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:24:04.338116 kubelet[2471]: I0212 19:24:04.338084 2471 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18110bec-347e-47ff-a549-74df337db3e2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "18110bec-347e-47ff-a549-74df337db3e2" (UID: "18110bec-347e-47ff-a549-74df337db3e2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 12 19:24:04.338173 kubelet[2471]: I0212 19:24:04.338093 2471 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-hostproc" (OuterVolumeSpecName: "hostproc") pod "18110bec-347e-47ff-a549-74df337db3e2" (UID: "18110bec-347e-47ff-a549-74df337db3e2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:24:04.338260 kubelet[2471]: I0212 19:24:04.338247 2471 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "18110bec-347e-47ff-a549-74df337db3e2" (UID: "18110bec-347e-47ff-a549-74df337db3e2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:24:04.340445 kubelet[2471]: I0212 19:24:04.340410 2471 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9a05f5e-b6d7-46b5-93b3-d6b8c1892565-kube-api-access-rmv7t" (OuterVolumeSpecName: "kube-api-access-rmv7t") pod "c9a05f5e-b6d7-46b5-93b3-d6b8c1892565" (UID: "c9a05f5e-b6d7-46b5-93b3-d6b8c1892565"). InnerVolumeSpecName "kube-api-access-rmv7t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 19:24:04.340592 kubelet[2471]: I0212 19:24:04.340521 2471 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "18110bec-347e-47ff-a549-74df337db3e2" (UID: "18110bec-347e-47ff-a549-74df337db3e2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:24:04.340977 kubelet[2471]: I0212 19:24:04.340537 2471 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "18110bec-347e-47ff-a549-74df337db3e2" (UID: "18110bec-347e-47ff-a549-74df337db3e2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:24:04.341078 kubelet[2471]: I0212 19:24:04.340549 2471 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "18110bec-347e-47ff-a549-74df337db3e2" (UID: "18110bec-347e-47ff-a549-74df337db3e2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:24:04.341393 kubelet[2471]: I0212 19:24:04.340560 2471 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "18110bec-347e-47ff-a549-74df337db3e2" (UID: "18110bec-347e-47ff-a549-74df337db3e2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:24:04.341517 kubelet[2471]: I0212 19:24:04.340614 2471 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18110bec-347e-47ff-a549-74df337db3e2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "18110bec-347e-47ff-a549-74df337db3e2" (UID: "18110bec-347e-47ff-a549-74df337db3e2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 12 19:24:04.341598 kubelet[2471]: W0212 19:24:04.340832 2471 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/c9a05f5e-b6d7-46b5-93b3-d6b8c1892565/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 12 19:24:04.342294 kubelet[2471]: I0212 19:24:04.340904 2471 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-cni-path" (OuterVolumeSpecName: "cni-path") pod "18110bec-347e-47ff-a549-74df337db3e2" (UID: "18110bec-347e-47ff-a549-74df337db3e2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:24:04.342834 kubelet[2471]: I0212 19:24:04.342808 2471 scope.go:115] "RemoveContainer" containerID="56f96c404fee5a08ff4b0a1f9f935fff71a6bc291f4eea5cf92eaba071884d82"
Feb 12 19:24:04.344262 kubelet[2471]: I0212 19:24:04.344233 2471 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9a05f5e-b6d7-46b5-93b3-d6b8c1892565-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c9a05f5e-b6d7-46b5-93b3-d6b8c1892565" (UID: "c9a05f5e-b6d7-46b5-93b3-d6b8c1892565"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 12 19:24:04.346911 env[1381]: time="2024-02-12T19:24:04.346533707Z" level=info msg="RemoveContainer for \"56f96c404fee5a08ff4b0a1f9f935fff71a6bc291f4eea5cf92eaba071884d82\""
Feb 12 19:24:04.355373 systemd[1]: Removed slice kubepods-burstable-pod18110bec_347e_47ff_a549_74df337db3e2.slice.
Feb 12 19:24:04.355451 systemd[1]: kubepods-burstable-pod18110bec_347e_47ff_a549_74df337db3e2.slice: Consumed 6.428s CPU time.
Feb 12 19:24:04.359132 env[1381]: time="2024-02-12T19:24:04.358972526Z" level=info msg="RemoveContainer for \"56f96c404fee5a08ff4b0a1f9f935fff71a6bc291f4eea5cf92eaba071884d82\" returns successfully"
Feb 12 19:24:04.360560 kubelet[2471]: I0212 19:24:04.360535 2471 scope.go:115] "RemoveContainer" containerID="56f96c404fee5a08ff4b0a1f9f935fff71a6bc291f4eea5cf92eaba071884d82"
Feb 12 19:24:04.361733 env[1381]: time="2024-02-12T19:24:04.361640399Z" level=error msg="ContainerStatus for \"56f96c404fee5a08ff4b0a1f9f935fff71a6bc291f4eea5cf92eaba071884d82\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"56f96c404fee5a08ff4b0a1f9f935fff71a6bc291f4eea5cf92eaba071884d82\": not found"
Feb 12 19:24:04.361916 kubelet[2471]: E0212 19:24:04.361889 2471 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"56f96c404fee5a08ff4b0a1f9f935fff71a6bc291f4eea5cf92eaba071884d82\": not found" containerID="56f96c404fee5a08ff4b0a1f9f935fff71a6bc291f4eea5cf92eaba071884d82"
Feb 12 19:24:04.361997 kubelet[2471]: I0212 19:24:04.361925 2471 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:56f96c404fee5a08ff4b0a1f9f935fff71a6bc291f4eea5cf92eaba071884d82} err="failed to get container status \"56f96c404fee5a08ff4b0a1f9f935fff71a6bc291f4eea5cf92eaba071884d82\": rpc error: code = NotFound desc = an error occurred when try to find container \"56f96c404fee5a08ff4b0a1f9f935fff71a6bc291f4eea5cf92eaba071884d82\": not found"
Feb 12 19:24:04.361997 kubelet[2471]: I0212 19:24:04.361937 2471 scope.go:115] "RemoveContainer" containerID="907d87c0fc85932856821362ff3e3284a8e4849eec0f8749b3c7deff843f2866"
Feb 12 19:24:04.364365 env[1381]: time="2024-02-12T19:24:04.364329790Z" level=info msg="RemoveContainer for \"907d87c0fc85932856821362ff3e3284a8e4849eec0f8749b3c7deff843f2866\""
Feb 12 19:24:04.375854 env[1381]: time="2024-02-12T19:24:04.375815709Z" level=info msg="RemoveContainer for \"907d87c0fc85932856821362ff3e3284a8e4849eec0f8749b3c7deff843f2866\" returns successfully"
Feb 12 19:24:04.376070 kubelet[2471]: I0212 19:24:04.376039 2471 scope.go:115] "RemoveContainer" containerID="b0bfd5a3fe1beac46543cae2e6442c3e14323218c591f7b8bca23b772289ceb6"
Feb 12 19:24:04.376993 env[1381]: time="2024-02-12T19:24:04.376962197Z" level=info msg="RemoveContainer for \"b0bfd5a3fe1beac46543cae2e6442c3e14323218c591f7b8bca23b772289ceb6\""
Feb 12 19:24:04.386863 env[1381]: time="2024-02-12T19:24:04.386823818Z" level=info msg="RemoveContainer for \"b0bfd5a3fe1beac46543cae2e6442c3e14323218c591f7b8bca23b772289ceb6\" returns successfully"
Feb 12 19:24:04.387072 kubelet[2471]: I0212 19:24:04.387024 2471 scope.go:115] "RemoveContainer" containerID="137b809032c9fbdb4fda0278837f5ab23d6f6b179be6c09185e55907567093dc"
Feb 12 19:24:04.388257 env[1381]: time="2024-02-12T19:24:04.387998144Z" level=info msg="RemoveContainer for \"137b809032c9fbdb4fda0278837f5ab23d6f6b179be6c09185e55907567093dc\""
Feb 12 19:24:04.398487 env[1381]: time="2024-02-12T19:24:04.398343335Z" level=info msg="RemoveContainer for \"137b809032c9fbdb4fda0278837f5ab23d6f6b179be6c09185e55907567093dc\" returns successfully"
Feb 12 19:24:04.398622 kubelet[2471]: I0212 19:24:04.398596 2471 scope.go:115] "RemoveContainer" containerID="a8ace69797ef4451b08622387cee459124c4dfac4bfc764640389980d70710f1"
Feb 12 19:24:04.399577 env[1381]: time="2024-02-12T19:24:04.399553339Z" level=info msg="RemoveContainer for \"a8ace69797ef4451b08622387cee459124c4dfac4bfc764640389980d70710f1\""
Feb 12 19:24:04.408290 env[1381]: time="2024-02-12T19:24:04.408254993Z" level=info msg="RemoveContainer for \"a8ace69797ef4451b08622387cee459124c4dfac4bfc764640389980d70710f1\" returns successfully"
Feb 12 19:24:04.408599 kubelet[2471]: I0212 19:24:04.408575 2471 scope.go:115] "RemoveContainer" containerID="12b2d62eeb28b84cec9ddcd7e62b626fe58f50a82d646c201988e9fc57525ea5"
Feb 12 19:24:04.409593 env[1381]: time="2024-02-12T19:24:04.409572070Z" level=info msg="RemoveContainer for \"12b2d62eeb28b84cec9ddcd7e62b626fe58f50a82d646c201988e9fc57525ea5\"" Feb 12 19:24:04.417162 env[1381]: time="2024-02-12T19:24:04.417128116Z" level=info msg="RemoveContainer for \"12b2d62eeb28b84cec9ddcd7e62b626fe58f50a82d646c201988e9fc57525ea5\" returns successfully" Feb 12 19:24:04.417456 kubelet[2471]: I0212 19:24:04.417432 2471 scope.go:115] "RemoveContainer" containerID="907d87c0fc85932856821362ff3e3284a8e4849eec0f8749b3c7deff843f2866" Feb 12 19:24:04.417730 env[1381]: time="2024-02-12T19:24:04.417676522Z" level=error msg="ContainerStatus for \"907d87c0fc85932856821362ff3e3284a8e4849eec0f8749b3c7deff843f2866\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"907d87c0fc85932856821362ff3e3284a8e4849eec0f8749b3c7deff843f2866\": not found" Feb 12 19:24:04.417865 kubelet[2471]: E0212 19:24:04.417844 2471 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"907d87c0fc85932856821362ff3e3284a8e4849eec0f8749b3c7deff843f2866\": not found" containerID="907d87c0fc85932856821362ff3e3284a8e4849eec0f8749b3c7deff843f2866" Feb 12 19:24:04.417913 kubelet[2471]: I0212 19:24:04.417880 2471 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:907d87c0fc85932856821362ff3e3284a8e4849eec0f8749b3c7deff843f2866} err="failed to get container status \"907d87c0fc85932856821362ff3e3284a8e4849eec0f8749b3c7deff843f2866\": rpc error: code = NotFound desc = an error occurred when try to find container \"907d87c0fc85932856821362ff3e3284a8e4849eec0f8749b3c7deff843f2866\": not found" Feb 12 19:24:04.417913 kubelet[2471]: I0212 19:24:04.417892 2471 scope.go:115] "RemoveContainer" containerID="b0bfd5a3fe1beac46543cae2e6442c3e14323218c591f7b8bca23b772289ceb6" Feb 12 19:24:04.418076 env[1381]: 
time="2024-02-12T19:24:04.418024740Z" level=error msg="ContainerStatus for \"b0bfd5a3fe1beac46543cae2e6442c3e14323218c591f7b8bca23b772289ceb6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b0bfd5a3fe1beac46543cae2e6442c3e14323218c591f7b8bca23b772289ceb6\": not found" Feb 12 19:24:04.418189 kubelet[2471]: E0212 19:24:04.418154 2471 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b0bfd5a3fe1beac46543cae2e6442c3e14323218c591f7b8bca23b772289ceb6\": not found" containerID="b0bfd5a3fe1beac46543cae2e6442c3e14323218c591f7b8bca23b772289ceb6" Feb 12 19:24:04.418239 kubelet[2471]: I0212 19:24:04.418202 2471 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:b0bfd5a3fe1beac46543cae2e6442c3e14323218c591f7b8bca23b772289ceb6} err="failed to get container status \"b0bfd5a3fe1beac46543cae2e6442c3e14323218c591f7b8bca23b772289ceb6\": rpc error: code = NotFound desc = an error occurred when try to find container \"b0bfd5a3fe1beac46543cae2e6442c3e14323218c591f7b8bca23b772289ceb6\": not found" Feb 12 19:24:04.418239 kubelet[2471]: I0212 19:24:04.418212 2471 scope.go:115] "RemoveContainer" containerID="137b809032c9fbdb4fda0278837f5ab23d6f6b179be6c09185e55907567093dc" Feb 12 19:24:04.418451 env[1381]: time="2024-02-12T19:24:04.418407956Z" level=error msg="ContainerStatus for \"137b809032c9fbdb4fda0278837f5ab23d6f6b179be6c09185e55907567093dc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"137b809032c9fbdb4fda0278837f5ab23d6f6b179be6c09185e55907567093dc\": not found" Feb 12 19:24:04.418671 kubelet[2471]: E0212 19:24:04.418649 2471 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"137b809032c9fbdb4fda0278837f5ab23d6f6b179be6c09185e55907567093dc\": not found" containerID="137b809032c9fbdb4fda0278837f5ab23d6f6b179be6c09185e55907567093dc" Feb 12 19:24:04.418671 kubelet[2471]: I0212 19:24:04.418677 2471 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:137b809032c9fbdb4fda0278837f5ab23d6f6b179be6c09185e55907567093dc} err="failed to get container status \"137b809032c9fbdb4fda0278837f5ab23d6f6b179be6c09185e55907567093dc\": rpc error: code = NotFound desc = an error occurred when try to find container \"137b809032c9fbdb4fda0278837f5ab23d6f6b179be6c09185e55907567093dc\": not found" Feb 12 19:24:04.418805 kubelet[2471]: I0212 19:24:04.418686 2471 scope.go:115] "RemoveContainer" containerID="a8ace69797ef4451b08622387cee459124c4dfac4bfc764640389980d70710f1" Feb 12 19:24:04.418874 env[1381]: time="2024-02-12T19:24:04.418822530Z" level=error msg="ContainerStatus for \"a8ace69797ef4451b08622387cee459124c4dfac4bfc764640389980d70710f1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a8ace69797ef4451b08622387cee459124c4dfac4bfc764640389980d70710f1\": not found" Feb 12 19:24:04.419002 kubelet[2471]: E0212 19:24:04.418978 2471 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a8ace69797ef4451b08622387cee459124c4dfac4bfc764640389980d70710f1\": not found" containerID="a8ace69797ef4451b08622387cee459124c4dfac4bfc764640389980d70710f1" Feb 12 19:24:04.419045 kubelet[2471]: I0212 19:24:04.419006 2471 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:a8ace69797ef4451b08622387cee459124c4dfac4bfc764640389980d70710f1} err="failed to get container status \"a8ace69797ef4451b08622387cee459124c4dfac4bfc764640389980d70710f1\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"a8ace69797ef4451b08622387cee459124c4dfac4bfc764640389980d70710f1\": not found" Feb 12 19:24:04.419045 kubelet[2471]: I0212 19:24:04.419016 2471 scope.go:115] "RemoveContainer" containerID="12b2d62eeb28b84cec9ddcd7e62b626fe58f50a82d646c201988e9fc57525ea5" Feb 12 19:24:04.419264 env[1381]: time="2024-02-12T19:24:04.419220585Z" level=error msg="ContainerStatus for \"12b2d62eeb28b84cec9ddcd7e62b626fe58f50a82d646c201988e9fc57525ea5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"12b2d62eeb28b84cec9ddcd7e62b626fe58f50a82d646c201988e9fc57525ea5\": not found" Feb 12 19:24:04.419449 kubelet[2471]: E0212 19:24:04.419422 2471 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"12b2d62eeb28b84cec9ddcd7e62b626fe58f50a82d646c201988e9fc57525ea5\": not found" containerID="12b2d62eeb28b84cec9ddcd7e62b626fe58f50a82d646c201988e9fc57525ea5" Feb 12 19:24:04.419449 kubelet[2471]: I0212 19:24:04.419449 2471 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:12b2d62eeb28b84cec9ddcd7e62b626fe58f50a82d646c201988e9fc57525ea5} err="failed to get container status \"12b2d62eeb28b84cec9ddcd7e62b626fe58f50a82d646c201988e9fc57525ea5\": rpc error: code = NotFound desc = an error occurred when try to find container \"12b2d62eeb28b84cec9ddcd7e62b626fe58f50a82d646c201988e9fc57525ea5\": not found" Feb 12 19:24:04.435655 kubelet[2471]: I0212 19:24:04.435626 2471 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-cilium-cgroup\") on node \"ci-3510.3.2-a-f75f2c89dc\" DevicePath \"\"" Feb 12 19:24:04.435655 kubelet[2471]: I0212 19:24:04.435654 2471 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-hostproc\") on node 
\"ci-3510.3.2-a-f75f2c89dc\" DevicePath \"\"" Feb 12 19:24:04.435763 kubelet[2471]: I0212 19:24:04.435668 2471 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-f75f2c89dc\" DevicePath \"\"" Feb 12 19:24:04.435763 kubelet[2471]: I0212 19:24:04.435678 2471 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-bpf-maps\") on node \"ci-3510.3.2-a-f75f2c89dc\" DevicePath \"\"" Feb 12 19:24:04.435763 kubelet[2471]: I0212 19:24:04.435688 2471 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-xtables-lock\") on node \"ci-3510.3.2-a-f75f2c89dc\" DevicePath \"\"" Feb 12 19:24:04.435763 kubelet[2471]: I0212 19:24:04.435697 2471 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-cni-path\") on node \"ci-3510.3.2-a-f75f2c89dc\" DevicePath \"\"" Feb 12 19:24:04.435763 kubelet[2471]: I0212 19:24:04.435706 2471 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-host-proc-sys-net\") on node \"ci-3510.3.2-a-f75f2c89dc\" DevicePath \"\"" Feb 12 19:24:04.435763 kubelet[2471]: I0212 19:24:04.435717 2471 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rmv7t\" (UniqueName: \"kubernetes.io/projected/c9a05f5e-b6d7-46b5-93b3-d6b8c1892565-kube-api-access-rmv7t\") on node \"ci-3510.3.2-a-f75f2c89dc\" DevicePath \"\"" Feb 12 19:24:04.435763 kubelet[2471]: I0212 19:24:04.435727 2471 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/c9a05f5e-b6d7-46b5-93b3-d6b8c1892565-cilium-config-path\") on node \"ci-3510.3.2-a-f75f2c89dc\" DevicePath \"\"" Feb 12 19:24:04.435763 kubelet[2471]: I0212 19:24:04.435736 2471 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/18110bec-347e-47ff-a549-74df337db3e2-hubble-tls\") on node \"ci-3510.3.2-a-f75f2c89dc\" DevicePath \"\"" Feb 12 19:24:04.435937 kubelet[2471]: I0212 19:24:04.435747 2471 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-zq7d9\" (UniqueName: \"kubernetes.io/projected/18110bec-347e-47ff-a549-74df337db3e2-kube-api-access-zq7d9\") on node \"ci-3510.3.2-a-f75f2c89dc\" DevicePath \"\"" Feb 12 19:24:04.435937 kubelet[2471]: I0212 19:24:04.435757 2471 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/18110bec-347e-47ff-a549-74df337db3e2-cilium-config-path\") on node \"ci-3510.3.2-a-f75f2c89dc\" DevicePath \"\"" Feb 12 19:24:04.435937 kubelet[2471]: I0212 19:24:04.435766 2471 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-lib-modules\") on node \"ci-3510.3.2-a-f75f2c89dc\" DevicePath \"\"" Feb 12 19:24:04.435937 kubelet[2471]: I0212 19:24:04.435776 2471 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/18110bec-347e-47ff-a549-74df337db3e2-clustermesh-secrets\") on node \"ci-3510.3.2-a-f75f2c89dc\" DevicePath \"\"" Feb 12 19:24:04.435937 kubelet[2471]: I0212 19:24:04.435785 2471 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-cilium-run\") on node \"ci-3510.3.2-a-f75f2c89dc\" DevicePath \"\"" Feb 12 19:24:04.435937 kubelet[2471]: I0212 19:24:04.435795 2471 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/18110bec-347e-47ff-a549-74df337db3e2-etc-cni-netd\") on node \"ci-3510.3.2-a-f75f2c89dc\" DevicePath \"\"" Feb 12 19:24:04.647269 systemd[1]: Removed slice kubepods-besteffort-podc9a05f5e_b6d7_46b5_93b3_d6b8c1892565.slice. Feb 12 19:24:04.954717 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-697da070cce4885e1ed2c652df137f860f10bf3d893c1bea17d372866d7f10e6-rootfs.mount: Deactivated successfully. Feb 12 19:24:04.954810 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-697da070cce4885e1ed2c652df137f860f10bf3d893c1bea17d372866d7f10e6-shm.mount: Deactivated successfully. Feb 12 19:24:04.954867 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59bdebc8fcd0c9915a781b0d7c7beb2d8033e4a63e105fe405293e9391873389-rootfs.mount: Deactivated successfully. Feb 12 19:24:04.954917 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-59bdebc8fcd0c9915a781b0d7c7beb2d8033e4a63e105fe405293e9391873389-shm.mount: Deactivated successfully. Feb 12 19:24:04.954968 systemd[1]: var-lib-kubelet-pods-c9a05f5e\x2db6d7\x2d46b5\x2d93b3\x2dd6b8c1892565-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drmv7t.mount: Deactivated successfully. Feb 12 19:24:04.955019 systemd[1]: var-lib-kubelet-pods-18110bec\x2d347e\x2d47ff\x2da549\x2d74df337db3e2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzq7d9.mount: Deactivated successfully. Feb 12 19:24:04.955069 systemd[1]: var-lib-kubelet-pods-18110bec\x2d347e\x2d47ff\x2da549\x2d74df337db3e2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 19:24:04.955116 systemd[1]: var-lib-kubelet-pods-18110bec\x2d347e\x2d47ff\x2da549\x2d74df337db3e2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 19:24:05.869314 sshd[3997]: pam_unix(sshd:session): session closed for user core Feb 12 19:24:05.871923 systemd-logind[1370]: Session 22 logged out. Waiting for processes to exit. 
Feb 12 19:24:05.872678 systemd[1]: sshd@19-10.200.20.4:22-10.200.12.6:59926.service: Deactivated successfully. Feb 12 19:24:05.873415 systemd[1]: session-22.scope: Deactivated successfully. Feb 12 19:24:05.873627 systemd[1]: session-22.scope: Consumed 1.890s CPU time. Feb 12 19:24:05.873897 systemd-logind[1370]: Removed session 22. Feb 12 19:24:05.909663 kubelet[2471]: I0212 19:24:05.909638 2471 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=18110bec-347e-47ff-a549-74df337db3e2 path="/var/lib/kubelet/pods/18110bec-347e-47ff-a549-74df337db3e2/volumes" Feb 12 19:24:05.910604 kubelet[2471]: I0212 19:24:05.910591 2471 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=c9a05f5e-b6d7-46b5-93b3-d6b8c1892565 path="/var/lib/kubelet/pods/c9a05f5e-b6d7-46b5-93b3-d6b8c1892565/volumes" Feb 12 19:24:05.939050 systemd[1]: Started sshd@20-10.200.20.4:22-10.200.12.6:59930.service. Feb 12 19:24:06.077123 kubelet[2471]: E0212 19:24:06.076387 2471 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 19:24:06.352495 sshd[4161]: Accepted publickey for core from 10.200.12.6 port 59930 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:24:06.357162 sshd[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:24:06.361590 systemd[1]: Started session-23.scope. Feb 12 19:24:06.362563 systemd-logind[1370]: New session 23 of user core. 
Feb 12 19:24:07.541843 kubelet[2471]: I0212 19:24:07.541798 2471 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:24:07.542166 kubelet[2471]: E0212 19:24:07.541861 2471 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="18110bec-347e-47ff-a549-74df337db3e2" containerName="mount-cgroup" Feb 12 19:24:07.542166 kubelet[2471]: E0212 19:24:07.541871 2471 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="18110bec-347e-47ff-a549-74df337db3e2" containerName="apply-sysctl-overwrites" Feb 12 19:24:07.542166 kubelet[2471]: E0212 19:24:07.541877 2471 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="18110bec-347e-47ff-a549-74df337db3e2" containerName="mount-bpf-fs" Feb 12 19:24:07.542166 kubelet[2471]: E0212 19:24:07.541884 2471 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c9a05f5e-b6d7-46b5-93b3-d6b8c1892565" containerName="cilium-operator" Feb 12 19:24:07.542166 kubelet[2471]: E0212 19:24:07.541891 2471 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="18110bec-347e-47ff-a549-74df337db3e2" containerName="clean-cilium-state" Feb 12 19:24:07.542166 kubelet[2471]: E0212 19:24:07.541900 2471 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="18110bec-347e-47ff-a549-74df337db3e2" containerName="cilium-agent" Feb 12 19:24:07.542166 kubelet[2471]: I0212 19:24:07.541920 2471 memory_manager.go:346] "RemoveStaleState removing state" podUID="18110bec-347e-47ff-a549-74df337db3e2" containerName="cilium-agent" Feb 12 19:24:07.542166 kubelet[2471]: I0212 19:24:07.541927 2471 memory_manager.go:346] "RemoveStaleState removing state" podUID="c9a05f5e-b6d7-46b5-93b3-d6b8c1892565" containerName="cilium-operator" Feb 12 19:24:07.546591 systemd[1]: Created slice kubepods-burstable-pod2ee82e1b_171c_4f7d_8837_fa5e65aecc60.slice. 
Feb 12 19:24:07.552333 kubelet[2471]: W0212 19:24:07.552291 2471 reflector.go:533] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510.3.2-a-f75f2c89dc" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-f75f2c89dc' and this object Feb 12 19:24:07.552333 kubelet[2471]: E0212 19:24:07.552330 2471 reflector.go:148] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510.3.2-a-f75f2c89dc" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-f75f2c89dc' and this object Feb 12 19:24:07.552515 kubelet[2471]: W0212 19:24:07.552371 2471 reflector.go:533] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.2-a-f75f2c89dc" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-f75f2c89dc' and this object Feb 12 19:24:07.552515 kubelet[2471]: E0212 19:24:07.552381 2471 reflector.go:148] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.2-a-f75f2c89dc" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-f75f2c89dc' and this object Feb 12 19:24:07.552515 kubelet[2471]: W0212 19:24:07.552411 2471 reflector.go:533] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.2-a-f75f2c89dc" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 
'ci-3510.3.2-a-f75f2c89dc' and this object Feb 12 19:24:07.552515 kubelet[2471]: E0212 19:24:07.552420 2471 reflector.go:148] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.2-a-f75f2c89dc" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-f75f2c89dc' and this object Feb 12 19:24:07.552515 kubelet[2471]: W0212 19:24:07.552450 2471 reflector.go:533] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.2-a-f75f2c89dc" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-f75f2c89dc' and this object Feb 12 19:24:07.552654 kubelet[2471]: E0212 19:24:07.552460 2471 reflector.go:148] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.2-a-f75f2c89dc" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-f75f2c89dc' and this object Feb 12 19:24:07.580828 sshd[4161]: pam_unix(sshd:session): session closed for user core Feb 12 19:24:07.584553 systemd[1]: sshd@20-10.200.20.4:22-10.200.12.6:59930.service: Deactivated successfully. Feb 12 19:24:07.585295 systemd[1]: session-23.scope: Deactivated successfully. Feb 12 19:24:07.586431 systemd-logind[1370]: Session 23 logged out. Waiting for processes to exit. Feb 12 19:24:07.587454 systemd-logind[1370]: Removed session 23. 
Feb 12 19:24:07.654621 kubelet[2471]: I0212 19:24:07.654593 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-cilium-config-path\") pod \"cilium-qplkk\" (UID: \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\") " pod="kube-system/cilium-qplkk" Feb 12 19:24:07.654825 kubelet[2471]: I0212 19:24:07.654813 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-lib-modules\") pod \"cilium-qplkk\" (UID: \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\") " pod="kube-system/cilium-qplkk" Feb 12 19:24:07.654925 kubelet[2471]: I0212 19:24:07.654915 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-cilium-ipsec-secrets\") pod \"cilium-qplkk\" (UID: \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\") " pod="kube-system/cilium-qplkk" Feb 12 19:24:07.655514 kubelet[2471]: I0212 19:24:07.655045 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-hostproc\") pod \"cilium-qplkk\" (UID: \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\") " pod="kube-system/cilium-qplkk" Feb 12 19:24:07.655514 kubelet[2471]: I0212 19:24:07.655110 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4kc9\" (UniqueName: \"kubernetes.io/projected/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-kube-api-access-p4kc9\") pod \"cilium-qplkk\" (UID: \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\") " pod="kube-system/cilium-qplkk" Feb 12 19:24:07.655514 kubelet[2471]: I0212 19:24:07.655139 2471 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-hubble-tls\") pod \"cilium-qplkk\" (UID: \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\") " pod="kube-system/cilium-qplkk" Feb 12 19:24:07.655514 kubelet[2471]: I0212 19:24:07.655178 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-etc-cni-netd\") pod \"cilium-qplkk\" (UID: \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\") " pod="kube-system/cilium-qplkk" Feb 12 19:24:07.655514 kubelet[2471]: I0212 19:24:07.655201 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-bpf-maps\") pod \"cilium-qplkk\" (UID: \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\") " pod="kube-system/cilium-qplkk" Feb 12 19:24:07.655514 kubelet[2471]: I0212 19:24:07.655234 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-cilium-cgroup\") pod \"cilium-qplkk\" (UID: \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\") " pod="kube-system/cilium-qplkk" Feb 12 19:24:07.655720 kubelet[2471]: I0212 19:24:07.655261 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-xtables-lock\") pod \"cilium-qplkk\" (UID: \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\") " pod="kube-system/cilium-qplkk" Feb 12 19:24:07.655720 kubelet[2471]: I0212 19:24:07.655290 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-cilium-run\") pod \"cilium-qplkk\" (UID: \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\") " pod="kube-system/cilium-qplkk" Feb 12 19:24:07.655720 kubelet[2471]: I0212 19:24:07.655314 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-host-proc-sys-net\") pod \"cilium-qplkk\" (UID: \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\") " pod="kube-system/cilium-qplkk" Feb 12 19:24:07.655720 kubelet[2471]: I0212 19:24:07.655341 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-host-proc-sys-kernel\") pod \"cilium-qplkk\" (UID: \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\") " pod="kube-system/cilium-qplkk" Feb 12 19:24:07.655720 kubelet[2471]: I0212 19:24:07.655365 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-cni-path\") pod \"cilium-qplkk\" (UID: \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\") " pod="kube-system/cilium-qplkk" Feb 12 19:24:07.655720 kubelet[2471]: I0212 19:24:07.655385 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-clustermesh-secrets\") pod \"cilium-qplkk\" (UID: \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\") " pod="kube-system/cilium-qplkk" Feb 12 19:24:07.657821 systemd[1]: Started sshd@21-10.200.20.4:22-10.200.12.6:49424.service. 
Feb 12 19:24:08.103889 sshd[4171]: Accepted publickey for core from 10.200.12.6 port 49424 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:24:08.105447 sshd[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:24:08.109780 systemd[1]: Started session-24.scope. Feb 12 19:24:08.110428 systemd-logind[1370]: New session 24 of user core. Feb 12 19:24:08.507987 sshd[4171]: pam_unix(sshd:session): session closed for user core Feb 12 19:24:08.510875 systemd[1]: sshd@21-10.200.20.4:22-10.200.12.6:49424.service: Deactivated successfully. Feb 12 19:24:08.511641 systemd[1]: session-24.scope: Deactivated successfully. Feb 12 19:24:08.512185 systemd-logind[1370]: Session 24 logged out. Waiting for processes to exit. Feb 12 19:24:08.512886 systemd-logind[1370]: Removed session 24. Feb 12 19:24:08.579154 systemd[1]: Started sshd@22-10.200.20.4:22-10.200.12.6:49440.service. Feb 12 19:24:08.756742 kubelet[2471]: E0212 19:24:08.756714 2471 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 12 19:24:08.757156 kubelet[2471]: E0212 19:24:08.757142 2471 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-cilium-config-path podName:2ee82e1b-171c-4f7d-8837-fa5e65aecc60 nodeName:}" failed. No retries permitted until 2024-02-12 19:24:09.257119561 +0000 UTC m=+203.551596465 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-cilium-config-path") pod "cilium-qplkk" (UID: "2ee82e1b-171c-4f7d-8837-fa5e65aecc60") : failed to sync configmap cache: timed out waiting for the condition Feb 12 19:24:08.757450 kubelet[2471]: E0212 19:24:08.756724 2471 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Feb 12 19:24:08.757618 kubelet[2471]: E0212 19:24:08.757604 2471 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-clustermesh-secrets podName:2ee82e1b-171c-4f7d-8837-fa5e65aecc60 nodeName:}" failed. No retries permitted until 2024-02-12 19:24:09.257590013 +0000 UTC m=+203.552066957 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-clustermesh-secrets") pod "cilium-qplkk" (UID: "2ee82e1b-171c-4f7d-8837-fa5e65aecc60") : failed to sync secret cache: timed out waiting for the condition Feb 12 19:24:08.906695 kubelet[2471]: E0212 19:24:08.906660 2471 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-5d78c9869d-rqtn4" podUID=424959a4-582f-4261-b5c8-af3cfc085d21 Feb 12 19:24:08.999019 sshd[4186]: Accepted publickey for core from 10.200.12.6 port 49440 ssh2: RSA SHA256:SfUjs7MZHb4dm/nWS32sXF6T7NXqXtYp+5K7LVbLt6U Feb 12 19:24:09.000577 sshd[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:24:09.004829 systemd[1]: Started session-25.scope. Feb 12 19:24:09.005110 systemd-logind[1370]: New session 25 of user core. 
Feb 12 19:24:09.349373 env[1381]: time="2024-02-12T19:24:09.349329393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qplkk,Uid:2ee82e1b-171c-4f7d-8837-fa5e65aecc60,Namespace:kube-system,Attempt:0,}" Feb 12 19:24:09.382513 env[1381]: time="2024-02-12T19:24:09.382414848Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:24:09.382513 env[1381]: time="2024-02-12T19:24:09.382454086Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:24:09.382711 env[1381]: time="2024-02-12T19:24:09.382485564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:24:09.382768 env[1381]: time="2024-02-12T19:24:09.382728750Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/449e14fe81d911b30578f6d55267a743fe982ad04ffc51ab481bdb15cf1e8ffc pid=4202 runtime=io.containerd.runc.v2 Feb 12 19:24:09.396719 systemd[1]: Started cri-containerd-449e14fe81d911b30578f6d55267a743fe982ad04ffc51ab481bdb15cf1e8ffc.scope. Feb 12 19:24:09.400133 systemd[1]: run-containerd-runc-k8s.io-449e14fe81d911b30578f6d55267a743fe982ad04ffc51ab481bdb15cf1e8ffc-runc.7yeRxX.mount: Deactivated successfully. 
Feb 12 19:24:09.423884 env[1381]: time="2024-02-12T19:24:09.423840504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qplkk,Uid:2ee82e1b-171c-4f7d-8837-fa5e65aecc60,Namespace:kube-system,Attempt:0,} returns sandbox id \"449e14fe81d911b30578f6d55267a743fe982ad04ffc51ab481bdb15cf1e8ffc\"" Feb 12 19:24:09.427934 env[1381]: time="2024-02-12T19:24:09.427897871Z" level=info msg="CreateContainer within sandbox \"449e14fe81d911b30578f6d55267a743fe982ad04ffc51ab481bdb15cf1e8ffc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:24:09.465216 env[1381]: time="2024-02-12T19:24:09.465171086Z" level=info msg="CreateContainer within sandbox \"449e14fe81d911b30578f6d55267a743fe982ad04ffc51ab481bdb15cf1e8ffc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"67c3fc11b3ffc48b5f59a576f09c1c22bb925d763a21d95900fac67fecfc8815\"" Feb 12 19:24:09.466740 env[1381]: time="2024-02-12T19:24:09.466704678Z" level=info msg="StartContainer for \"67c3fc11b3ffc48b5f59a576f09c1c22bb925d763a21d95900fac67fecfc8815\"" Feb 12 19:24:09.482882 systemd[1]: Started cri-containerd-67c3fc11b3ffc48b5f59a576f09c1c22bb925d763a21d95900fac67fecfc8815.scope. Feb 12 19:24:09.494240 systemd[1]: cri-containerd-67c3fc11b3ffc48b5f59a576f09c1c22bb925d763a21d95900fac67fecfc8815.scope: Deactivated successfully. 
Feb 12 19:24:09.556370 env[1381]: time="2024-02-12T19:24:09.556314841Z" level=info msg="shim disconnected" id=67c3fc11b3ffc48b5f59a576f09c1c22bb925d763a21d95900fac67fecfc8815 Feb 12 19:24:09.556370 env[1381]: time="2024-02-12T19:24:09.556366678Z" level=warning msg="cleaning up after shim disconnected" id=67c3fc11b3ffc48b5f59a576f09c1c22bb925d763a21d95900fac67fecfc8815 namespace=k8s.io Feb 12 19:24:09.556370 env[1381]: time="2024-02-12T19:24:09.556375717Z" level=info msg="cleaning up dead shim" Feb 12 19:24:09.563065 env[1381]: time="2024-02-12T19:24:09.563009015Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4262 runtime=io.containerd.runc.v2\ntime=\"2024-02-12T19:24:09Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/67c3fc11b3ffc48b5f59a576f09c1c22bb925d763a21d95900fac67fecfc8815/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 12 19:24:09.563362 env[1381]: time="2024-02-12T19:24:09.563259881Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed" Feb 12 19:24:09.567161 env[1381]: time="2024-02-12T19:24:09.567113939Z" level=error msg="Failed to pipe stdout of container \"67c3fc11b3ffc48b5f59a576f09c1c22bb925d763a21d95900fac67fecfc8815\"" error="reading from a closed fifo" Feb 12 19:24:09.567869 env[1381]: time="2024-02-12T19:24:09.567842337Z" level=error msg="Failed to pipe stderr of container \"67c3fc11b3ffc48b5f59a576f09c1c22bb925d763a21d95900fac67fecfc8815\"" error="reading from a closed fifo" Feb 12 19:24:09.572330 env[1381]: time="2024-02-12T19:24:09.572266643Z" level=error msg="StartContainer for \"67c3fc11b3ffc48b5f59a576f09c1c22bb925d763a21d95900fac67fecfc8815\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Feb 12 19:24:09.572705 kubelet[2471]: E0212 19:24:09.572671 2471 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="67c3fc11b3ffc48b5f59a576f09c1c22bb925d763a21d95900fac67fecfc8815" Feb 12 19:24:09.572816 kubelet[2471]: E0212 19:24:09.572790 2471 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 12 19:24:09.572816 kubelet[2471]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 12 19:24:09.572816 kubelet[2471]: rm /hostbin/cilium-mount Feb 12 19:24:09.572890 kubelet[2471]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-p4kc9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-qplkk_kube-system(2ee82e1b-171c-4f7d-8837-fa5e65aecc60): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 12 19:24:09.572890 kubelet[2471]: E0212 19:24:09.572833 2471 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-qplkk" podUID=2ee82e1b-171c-4f7d-8837-fa5e65aecc60 Feb 12 19:24:10.369886 env[1381]: time="2024-02-12T19:24:10.369834749Z" level=info msg="StopPodSandbox for \"449e14fe81d911b30578f6d55267a743fe982ad04ffc51ab481bdb15cf1e8ffc\"" Feb 12 19:24:10.370416 env[1381]: time="2024-02-12T19:24:10.369903785Z" level=info msg="Container to stop \"67c3fc11b3ffc48b5f59a576f09c1c22bb925d763a21d95900fac67fecfc8815\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:24:10.372157 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-449e14fe81d911b30578f6d55267a743fe982ad04ffc51ab481bdb15cf1e8ffc-shm.mount: Deactivated successfully. 
Feb 12 19:24:10.381577 systemd[1]: cri-containerd-449e14fe81d911b30578f6d55267a743fe982ad04ffc51ab481bdb15cf1e8ffc.scope: Deactivated successfully. Feb 12 19:24:10.404229 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-449e14fe81d911b30578f6d55267a743fe982ad04ffc51ab481bdb15cf1e8ffc-rootfs.mount: Deactivated successfully. Feb 12 19:24:10.421930 env[1381]: time="2024-02-12T19:24:10.421884565Z" level=info msg="shim disconnected" id=449e14fe81d911b30578f6d55267a743fe982ad04ffc51ab481bdb15cf1e8ffc Feb 12 19:24:10.422284 env[1381]: time="2024-02-12T19:24:10.422265623Z" level=warning msg="cleaning up after shim disconnected" id=449e14fe81d911b30578f6d55267a743fe982ad04ffc51ab481bdb15cf1e8ffc namespace=k8s.io Feb 12 19:24:10.422515 env[1381]: time="2024-02-12T19:24:10.422497930Z" level=info msg="cleaning up dead shim" Feb 12 19:24:10.436175 env[1381]: time="2024-02-12T19:24:10.436119800Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4297 runtime=io.containerd.runc.v2\n" Feb 12 19:24:10.436650 env[1381]: time="2024-02-12T19:24:10.436622651Z" level=info msg="TearDown network for sandbox \"449e14fe81d911b30578f6d55267a743fe982ad04ffc51ab481bdb15cf1e8ffc\" successfully" Feb 12 19:24:10.436745 env[1381]: time="2024-02-12T19:24:10.436728725Z" level=info msg="StopPodSandbox for \"449e14fe81d911b30578f6d55267a743fe982ad04ffc51ab481bdb15cf1e8ffc\" returns successfully" Feb 12 19:24:10.578508 kubelet[2471]: I0212 19:24:10.575759 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-cilium-ipsec-secrets\") pod \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\" (UID: \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\") " Feb 12 19:24:10.578508 kubelet[2471]: I0212 19:24:10.576116 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-p4kc9\" (UniqueName: \"kubernetes.io/projected/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-kube-api-access-p4kc9\") pod \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\" (UID: \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\") " Feb 12 19:24:10.578508 kubelet[2471]: I0212 19:24:10.576148 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-cilium-config-path\") pod \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\" (UID: \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\") " Feb 12 19:24:10.578508 kubelet[2471]: I0212 19:24:10.576168 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-cilium-run\") pod \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\" (UID: \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\") " Feb 12 19:24:10.578508 kubelet[2471]: I0212 19:24:10.576185 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-hostproc\") pod \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\" (UID: \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\") " Feb 12 19:24:10.578508 kubelet[2471]: I0212 19:24:10.576236 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-cilium-cgroup\") pod \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\" (UID: \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\") " Feb 12 19:24:10.578508 kubelet[2471]: I0212 19:24:10.576254 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-xtables-lock\") pod \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\" (UID: \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\") " Feb 12 19:24:10.578508 kubelet[2471]: 
I0212 19:24:10.576274 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-hubble-tls\") pod \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\" (UID: \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\") " Feb 12 19:24:10.578508 kubelet[2471]: I0212 19:24:10.576293 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-clustermesh-secrets\") pod \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\" (UID: \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\") " Feb 12 19:24:10.578508 kubelet[2471]: I0212 19:24:10.576312 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-bpf-maps\") pod \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\" (UID: \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\") " Feb 12 19:24:10.578508 kubelet[2471]: I0212 19:24:10.576330 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-host-proc-sys-net\") pod \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\" (UID: \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\") " Feb 12 19:24:10.578508 kubelet[2471]: I0212 19:24:10.576369 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-cni-path\") pod \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\" (UID: \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\") " Feb 12 19:24:10.578508 kubelet[2471]: I0212 19:24:10.576375 2471 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2ee82e1b-171c-4f7d-8837-fa5e65aecc60" (UID: 
"2ee82e1b-171c-4f7d-8837-fa5e65aecc60"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:10.578508 kubelet[2471]: I0212 19:24:10.576388 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-etc-cni-netd\") pod \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\" (UID: \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\") " Feb 12 19:24:10.578508 kubelet[2471]: I0212 19:24:10.576409 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-lib-modules\") pod \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\" (UID: \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\") " Feb 12 19:24:10.578508 kubelet[2471]: I0212 19:24:10.576427 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-host-proc-sys-kernel\") pod \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\" (UID: \"2ee82e1b-171c-4f7d-8837-fa5e65aecc60\") " Feb 12 19:24:10.586097 kubelet[2471]: I0212 19:24:10.576461 2471 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-cilium-cgroup\") on node \"ci-3510.3.2-a-f75f2c89dc\" DevicePath \"\"" Feb 12 19:24:10.586097 kubelet[2471]: I0212 19:24:10.576519 2471 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2ee82e1b-171c-4f7d-8837-fa5e65aecc60" (UID: "2ee82e1b-171c-4f7d-8837-fa5e65aecc60"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:10.586097 kubelet[2471]: I0212 19:24:10.576539 2471 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2ee82e1b-171c-4f7d-8837-fa5e65aecc60" (UID: "2ee82e1b-171c-4f7d-8837-fa5e65aecc60"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:10.586097 kubelet[2471]: I0212 19:24:10.576807 2471 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2ee82e1b-171c-4f7d-8837-fa5e65aecc60" (UID: "2ee82e1b-171c-4f7d-8837-fa5e65aecc60"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:10.586097 kubelet[2471]: I0212 19:24:10.576833 2471 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2ee82e1b-171c-4f7d-8837-fa5e65aecc60" (UID: "2ee82e1b-171c-4f7d-8837-fa5e65aecc60"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:10.586097 kubelet[2471]: I0212 19:24:10.576852 2471 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-cni-path" (OuterVolumeSpecName: "cni-path") pod "2ee82e1b-171c-4f7d-8837-fa5e65aecc60" (UID: "2ee82e1b-171c-4f7d-8837-fa5e65aecc60"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:10.586097 kubelet[2471]: I0212 19:24:10.576867 2471 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2ee82e1b-171c-4f7d-8837-fa5e65aecc60" (UID: "2ee82e1b-171c-4f7d-8837-fa5e65aecc60"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:10.586097 kubelet[2471]: I0212 19:24:10.576884 2471 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2ee82e1b-171c-4f7d-8837-fa5e65aecc60" (UID: "2ee82e1b-171c-4f7d-8837-fa5e65aecc60"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:10.586097 kubelet[2471]: I0212 19:24:10.576904 2471 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2ee82e1b-171c-4f7d-8837-fa5e65aecc60" (UID: "2ee82e1b-171c-4f7d-8837-fa5e65aecc60"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:10.586097 kubelet[2471]: W0212 19:24:10.577004 2471 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/2ee82e1b-171c-4f7d-8837-fa5e65aecc60/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 19:24:10.586097 kubelet[2471]: I0212 19:24:10.578607 2471 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-hostproc" (OuterVolumeSpecName: "hostproc") pod "2ee82e1b-171c-4f7d-8837-fa5e65aecc60" (UID: "2ee82e1b-171c-4f7d-8837-fa5e65aecc60"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:24:10.586097 kubelet[2471]: I0212 19:24:10.583988 2471 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2ee82e1b-171c-4f7d-8837-fa5e65aecc60" (UID: "2ee82e1b-171c-4f7d-8837-fa5e65aecc60"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:24:10.586097 kubelet[2471]: I0212 19:24:10.584056 2471 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-kube-api-access-p4kc9" (OuterVolumeSpecName: "kube-api-access-p4kc9") pod "2ee82e1b-171c-4f7d-8837-fa5e65aecc60" (UID: "2ee82e1b-171c-4f7d-8837-fa5e65aecc60"). InnerVolumeSpecName "kube-api-access-p4kc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:24:10.586097 kubelet[2471]: I0212 19:24:10.585909 2471 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2ee82e1b-171c-4f7d-8837-fa5e65aecc60" (UID: "2ee82e1b-171c-4f7d-8837-fa5e65aecc60"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:24:10.581628 systemd[1]: var-lib-kubelet-pods-2ee82e1b\x2d171c\x2d4f7d\x2d8837\x2dfa5e65aecc60-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 12 19:24:10.583204 systemd[1]: var-lib-kubelet-pods-2ee82e1b\x2d171c\x2d4f7d\x2d8837\x2dfa5e65aecc60-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp4kc9.mount: Deactivated successfully. 
Feb 12 19:24:10.586888 kubelet[2471]: I0212 19:24:10.586867 2471 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "2ee82e1b-171c-4f7d-8837-fa5e65aecc60" (UID: "2ee82e1b-171c-4f7d-8837-fa5e65aecc60"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:24:10.587306 kubelet[2471]: I0212 19:24:10.587286 2471 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2ee82e1b-171c-4f7d-8837-fa5e65aecc60" (UID: "2ee82e1b-171c-4f7d-8837-fa5e65aecc60"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:24:10.657422 kubelet[2471]: I0212 19:24:10.657334 2471 setters.go:548] "Node became not ready" node="ci-3510.3.2-a-f75f2c89dc" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-12 19:24:10.657286452 +0000 UTC m=+204.951763396 LastTransitionTime:2024-02-12 19:24:10.657286452 +0000 UTC m=+204.951763396 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 12 19:24:10.676916 kubelet[2471]: I0212 19:24:10.676889 2471 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-clustermesh-secrets\") on node \"ci-3510.3.2-a-f75f2c89dc\" DevicePath \"\"" Feb 12 19:24:10.677092 kubelet[2471]: I0212 19:24:10.677082 2471 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-bpf-maps\") on node \"ci-3510.3.2-a-f75f2c89dc\" DevicePath \"\"" Feb 12 19:24:10.677183 
kubelet[2471]: I0212 19:24:10.677171 2471 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-host-proc-sys-net\") on node \"ci-3510.3.2-a-f75f2c89dc\" DevicePath \"\"" Feb 12 19:24:10.677257 kubelet[2471]: I0212 19:24:10.677248 2471 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-cni-path\") on node \"ci-3510.3.2-a-f75f2c89dc\" DevicePath \"\"" Feb 12 19:24:10.677319 kubelet[2471]: I0212 19:24:10.677311 2471 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-etc-cni-netd\") on node \"ci-3510.3.2-a-f75f2c89dc\" DevicePath \"\"" Feb 12 19:24:10.677386 kubelet[2471]: I0212 19:24:10.677378 2471 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-lib-modules\") on node \"ci-3510.3.2-a-f75f2c89dc\" DevicePath \"\"" Feb 12 19:24:10.677448 kubelet[2471]: I0212 19:24:10.677441 2471 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-f75f2c89dc\" DevicePath \"\"" Feb 12 19:24:10.677539 kubelet[2471]: I0212 19:24:10.677529 2471 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-f75f2c89dc\" DevicePath \"\"" Feb 12 19:24:10.677604 kubelet[2471]: I0212 19:24:10.677596 2471 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-p4kc9\" (UniqueName: \"kubernetes.io/projected/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-kube-api-access-p4kc9\") on node \"ci-3510.3.2-a-f75f2c89dc\" DevicePath \"\"" 
Feb 12 19:24:10.677668 kubelet[2471]: I0212 19:24:10.677657 2471 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-cilium-config-path\") on node \"ci-3510.3.2-a-f75f2c89dc\" DevicePath \"\"" Feb 12 19:24:10.677734 kubelet[2471]: I0212 19:24:10.677726 2471 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-hostproc\") on node \"ci-3510.3.2-a-f75f2c89dc\" DevicePath \"\"" Feb 12 19:24:10.677790 kubelet[2471]: I0212 19:24:10.677783 2471 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-cilium-run\") on node \"ci-3510.3.2-a-f75f2c89dc\" DevicePath \"\"" Feb 12 19:24:10.677857 kubelet[2471]: I0212 19:24:10.677849 2471 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-hubble-tls\") on node \"ci-3510.3.2-a-f75f2c89dc\" DevicePath \"\"" Feb 12 19:24:10.677955 kubelet[2471]: I0212 19:24:10.677946 2471 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2ee82e1b-171c-4f7d-8837-fa5e65aecc60-xtables-lock\") on node \"ci-3510.3.2-a-f75f2c89dc\" DevicePath \"\"" Feb 12 19:24:10.906857 kubelet[2471]: E0212 19:24:10.906825 2471 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-5d78c9869d-rqtn4" podUID=424959a4-582f-4261-b5c8-af3cfc085d21 Feb 12 19:24:11.077488 kubelet[2471]: E0212 19:24:11.077445 2471 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: cni plugin not initialized" Feb 12 19:24:11.270968 systemd[1]: var-lib-kubelet-pods-2ee82e1b\x2d171c\x2d4f7d\x2d8837\x2dfa5e65aecc60-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 19:24:11.271077 systemd[1]: var-lib-kubelet-pods-2ee82e1b\x2d171c\x2d4f7d\x2d8837\x2dfa5e65aecc60-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 19:24:11.371425 kubelet[2471]: I0212 19:24:11.371337 2471 scope.go:115] "RemoveContainer" containerID="67c3fc11b3ffc48b5f59a576f09c1c22bb925d763a21d95900fac67fecfc8815" Feb 12 19:24:11.373535 env[1381]: time="2024-02-12T19:24:11.373376917Z" level=info msg="RemoveContainer for \"67c3fc11b3ffc48b5f59a576f09c1c22bb925d763a21d95900fac67fecfc8815\"" Feb 12 19:24:11.377442 systemd[1]: Removed slice kubepods-burstable-pod2ee82e1b_171c_4f7d_8837_fa5e65aecc60.slice. Feb 12 19:24:11.386922 env[1381]: time="2024-02-12T19:24:11.386794931Z" level=info msg="RemoveContainer for \"67c3fc11b3ffc48b5f59a576f09c1c22bb925d763a21d95900fac67fecfc8815\" returns successfully" Feb 12 19:24:11.412994 kubelet[2471]: I0212 19:24:11.412960 2471 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:24:11.413202 kubelet[2471]: E0212 19:24:11.413189 2471 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2ee82e1b-171c-4f7d-8837-fa5e65aecc60" containerName="mount-cgroup" Feb 12 19:24:11.413284 kubelet[2471]: I0212 19:24:11.413274 2471 memory_manager.go:346] "RemoveStaleState removing state" podUID="2ee82e1b-171c-4f7d-8837-fa5e65aecc60" containerName="mount-cgroup" Feb 12 19:24:11.418175 systemd[1]: Created slice kubepods-burstable-podb4e058a5_b46c_46eb_928e_f23e82ec2760.slice. 
Feb 12 19:24:11.583631 kubelet[2471]: I0212 19:24:11.583599 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b4e058a5-b46c-46eb-928e-f23e82ec2760-cilium-cgroup\") pod \"cilium-x7kk4\" (UID: \"b4e058a5-b46c-46eb-928e-f23e82ec2760\") " pod="kube-system/cilium-x7kk4" Feb 12 19:24:11.584007 kubelet[2471]: I0212 19:24:11.583994 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b4e058a5-b46c-46eb-928e-f23e82ec2760-cni-path\") pod \"cilium-x7kk4\" (UID: \"b4e058a5-b46c-46eb-928e-f23e82ec2760\") " pod="kube-system/cilium-x7kk4" Feb 12 19:24:11.584098 kubelet[2471]: I0212 19:24:11.584088 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b4e058a5-b46c-46eb-928e-f23e82ec2760-bpf-maps\") pod \"cilium-x7kk4\" (UID: \"b4e058a5-b46c-46eb-928e-f23e82ec2760\") " pod="kube-system/cilium-x7kk4" Feb 12 19:24:11.584194 kubelet[2471]: I0212 19:24:11.584184 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4e058a5-b46c-46eb-928e-f23e82ec2760-lib-modules\") pod \"cilium-x7kk4\" (UID: \"b4e058a5-b46c-46eb-928e-f23e82ec2760\") " pod="kube-system/cilium-x7kk4" Feb 12 19:24:11.584278 kubelet[2471]: I0212 19:24:11.584265 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2w8g\" (UniqueName: \"kubernetes.io/projected/b4e058a5-b46c-46eb-928e-f23e82ec2760-kube-api-access-v2w8g\") pod \"cilium-x7kk4\" (UID: \"b4e058a5-b46c-46eb-928e-f23e82ec2760\") " pod="kube-system/cilium-x7kk4" Feb 12 19:24:11.584355 kubelet[2471]: I0212 19:24:11.584345 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b4e058a5-b46c-46eb-928e-f23e82ec2760-host-proc-sys-kernel\") pod \"cilium-x7kk4\" (UID: \"b4e058a5-b46c-46eb-928e-f23e82ec2760\") " pod="kube-system/cilium-x7kk4" Feb 12 19:24:11.584444 kubelet[2471]: I0212 19:24:11.584432 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4e058a5-b46c-46eb-928e-f23e82ec2760-xtables-lock\") pod \"cilium-x7kk4\" (UID: \"b4e058a5-b46c-46eb-928e-f23e82ec2760\") " pod="kube-system/cilium-x7kk4" Feb 12 19:24:11.584546 kubelet[2471]: I0212 19:24:11.584537 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b4e058a5-b46c-46eb-928e-f23e82ec2760-cilium-config-path\") pod \"cilium-x7kk4\" (UID: \"b4e058a5-b46c-46eb-928e-f23e82ec2760\") " pod="kube-system/cilium-x7kk4" Feb 12 19:24:11.584620 kubelet[2471]: I0212 19:24:11.584611 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4e058a5-b46c-46eb-928e-f23e82ec2760-etc-cni-netd\") pod \"cilium-x7kk4\" (UID: \"b4e058a5-b46c-46eb-928e-f23e82ec2760\") " pod="kube-system/cilium-x7kk4" Feb 12 19:24:11.584703 kubelet[2471]: I0212 19:24:11.584694 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b4e058a5-b46c-46eb-928e-f23e82ec2760-hostproc\") pod \"cilium-x7kk4\" (UID: \"b4e058a5-b46c-46eb-928e-f23e82ec2760\") " pod="kube-system/cilium-x7kk4" Feb 12 19:24:11.584780 kubelet[2471]: I0212 19:24:11.584770 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b4e058a5-b46c-46eb-928e-f23e82ec2760-hubble-tls\") 
pod \"cilium-x7kk4\" (UID: \"b4e058a5-b46c-46eb-928e-f23e82ec2760\") " pod="kube-system/cilium-x7kk4" Feb 12 19:24:11.584860 kubelet[2471]: I0212 19:24:11.584838 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b4e058a5-b46c-46eb-928e-f23e82ec2760-clustermesh-secrets\") pod \"cilium-x7kk4\" (UID: \"b4e058a5-b46c-46eb-928e-f23e82ec2760\") " pod="kube-system/cilium-x7kk4" Feb 12 19:24:11.584937 kubelet[2471]: I0212 19:24:11.584928 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b4e058a5-b46c-46eb-928e-f23e82ec2760-host-proc-sys-net\") pod \"cilium-x7kk4\" (UID: \"b4e058a5-b46c-46eb-928e-f23e82ec2760\") " pod="kube-system/cilium-x7kk4" Feb 12 19:24:11.585008 kubelet[2471]: I0212 19:24:11.584999 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b4e058a5-b46c-46eb-928e-f23e82ec2760-cilium-ipsec-secrets\") pod \"cilium-x7kk4\" (UID: \"b4e058a5-b46c-46eb-928e-f23e82ec2760\") " pod="kube-system/cilium-x7kk4" Feb 12 19:24:11.585077 kubelet[2471]: I0212 19:24:11.585068 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b4e058a5-b46c-46eb-928e-f23e82ec2760-cilium-run\") pod \"cilium-x7kk4\" (UID: \"b4e058a5-b46c-46eb-928e-f23e82ec2760\") " pod="kube-system/cilium-x7kk4" Feb 12 19:24:11.721349 env[1381]: time="2024-02-12T19:24:11.720662096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x7kk4,Uid:b4e058a5-b46c-46eb-928e-f23e82ec2760,Namespace:kube-system,Attempt:0,}" Feb 12 19:24:11.757534 env[1381]: time="2024-02-12T19:24:11.757308100Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:24:11.757534 env[1381]: time="2024-02-12T19:24:11.757487130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:24:11.757534 env[1381]: time="2024-02-12T19:24:11.757499049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:24:11.757805 env[1381]: time="2024-02-12T19:24:11.757666680Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/37b8bcb0855476dd5196596724384adee91c7790990003c2d7359109845d080d pid=4326 runtime=io.containerd.runc.v2 Feb 12 19:24:11.767706 systemd[1]: Started cri-containerd-37b8bcb0855476dd5196596724384adee91c7790990003c2d7359109845d080d.scope. Feb 12 19:24:11.790912 env[1381]: time="2024-02-12T19:24:11.790864915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x7kk4,Uid:b4e058a5-b46c-46eb-928e-f23e82ec2760,Namespace:kube-system,Attempt:0,} returns sandbox id \"37b8bcb0855476dd5196596724384adee91c7790990003c2d7359109845d080d\"" Feb 12 19:24:11.794880 env[1381]: time="2024-02-12T19:24:11.794846733Z" level=info msg="CreateContainer within sandbox \"37b8bcb0855476dd5196596724384adee91c7790990003c2d7359109845d080d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:24:11.830531 env[1381]: time="2024-02-12T19:24:11.830447915Z" level=info msg="CreateContainer within sandbox \"37b8bcb0855476dd5196596724384adee91c7790990003c2d7359109845d080d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"92b7357951989c0e1a3a02b2bf23f8137d266580bd67c50e3fe23a0bc7da3550\"" Feb 12 19:24:11.831801 env[1381]: time="2024-02-12T19:24:11.831775241Z" level=info msg="StartContainer for \"92b7357951989c0e1a3a02b2bf23f8137d266580bd67c50e3fe23a0bc7da3550\"" Feb 12 19:24:11.846278 systemd[1]: Started 
cri-containerd-92b7357951989c0e1a3a02b2bf23f8137d266580bd67c50e3fe23a0bc7da3550.scope. Feb 12 19:24:11.875279 env[1381]: time="2024-02-12T19:24:11.875218747Z" level=info msg="StartContainer for \"92b7357951989c0e1a3a02b2bf23f8137d266580bd67c50e3fe23a0bc7da3550\" returns successfully" Feb 12 19:24:11.882700 systemd[1]: cri-containerd-92b7357951989c0e1a3a02b2bf23f8137d266580bd67c50e3fe23a0bc7da3550.scope: Deactivated successfully. Feb 12 19:24:11.911130 kubelet[2471]: I0212 19:24:11.910745 2471 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=2ee82e1b-171c-4f7d-8837-fa5e65aecc60 path="/var/lib/kubelet/pods/2ee82e1b-171c-4f7d-8837-fa5e65aecc60/volumes" Feb 12 19:24:11.943897 env[1381]: time="2024-02-12T19:24:11.943850252Z" level=info msg="shim disconnected" id=92b7357951989c0e1a3a02b2bf23f8137d266580bd67c50e3fe23a0bc7da3550 Feb 12 19:24:11.943897 env[1381]: time="2024-02-12T19:24:11.943895730Z" level=warning msg="cleaning up after shim disconnected" id=92b7357951989c0e1a3a02b2bf23f8137d266580bd67c50e3fe23a0bc7da3550 namespace=k8s.io Feb 12 19:24:11.944075 env[1381]: time="2024-02-12T19:24:11.943906529Z" level=info msg="cleaning up dead shim" Feb 12 19:24:11.950659 env[1381]: time="2024-02-12T19:24:11.950614596Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4408 runtime=io.containerd.runc.v2\n" Feb 12 19:24:12.381613 env[1381]: time="2024-02-12T19:24:12.381574529Z" level=info msg="CreateContainer within sandbox \"37b8bcb0855476dd5196596724384adee91c7790990003c2d7359109845d080d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 19:24:12.413632 env[1381]: time="2024-02-12T19:24:12.413580741Z" level=info msg="CreateContainer within sandbox \"37b8bcb0855476dd5196596724384adee91c7790990003c2d7359109845d080d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id 
\"d3887462e715bb5e83873685392578aa349abea651a4fc0727592c7620483085\"" Feb 12 19:24:12.414254 env[1381]: time="2024-02-12T19:24:12.414225146Z" level=info msg="StartContainer for \"d3887462e715bb5e83873685392578aa349abea651a4fc0727592c7620483085\"" Feb 12 19:24:12.431042 systemd[1]: Started cri-containerd-d3887462e715bb5e83873685392578aa349abea651a4fc0727592c7620483085.scope. Feb 12 19:24:12.465540 env[1381]: time="2024-02-12T19:24:12.465484627Z" level=info msg="StartContainer for \"d3887462e715bb5e83873685392578aa349abea651a4fc0727592c7620483085\" returns successfully" Feb 12 19:24:12.480168 systemd[1]: cri-containerd-d3887462e715bb5e83873685392578aa349abea651a4fc0727592c7620483085.scope: Deactivated successfully. Feb 12 19:24:12.511651 env[1381]: time="2024-02-12T19:24:12.511607868Z" level=info msg="shim disconnected" id=d3887462e715bb5e83873685392578aa349abea651a4fc0727592c7620483085 Feb 12 19:24:12.511919 env[1381]: time="2024-02-12T19:24:12.511901612Z" level=warning msg="cleaning up after shim disconnected" id=d3887462e715bb5e83873685392578aa349abea651a4fc0727592c7620483085 namespace=k8s.io Feb 12 19:24:12.511991 env[1381]: time="2024-02-12T19:24:12.511977608Z" level=info msg="cleaning up dead shim" Feb 12 19:24:12.519231 env[1381]: time="2024-02-12T19:24:12.519182134Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4469 runtime=io.containerd.runc.v2\n" Feb 12 19:24:12.661483 kubelet[2471]: W0212 19:24:12.661360 2471 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ee82e1b_171c_4f7d_8837_fa5e65aecc60.slice/cri-containerd-67c3fc11b3ffc48b5f59a576f09c1c22bb925d763a21d95900fac67fecfc8815.scope WatchSource:0}: container "67c3fc11b3ffc48b5f59a576f09c1c22bb925d763a21d95900fac67fecfc8815" in namespace "k8s.io": not found Feb 12 19:24:12.906680 kubelet[2471]: E0212 19:24:12.906629 2471 pod_workers.go:1294] "Error 
syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-5d78c9869d-rqtn4" podUID=424959a4-582f-4261-b5c8-af3cfc085d21 Feb 12 19:24:13.271136 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3887462e715bb5e83873685392578aa349abea651a4fc0727592c7620483085-rootfs.mount: Deactivated successfully. Feb 12 19:24:13.390422 env[1381]: time="2024-02-12T19:24:13.390296083Z" level=info msg="CreateContainer within sandbox \"37b8bcb0855476dd5196596724384adee91c7790990003c2d7359109845d080d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 19:24:13.414130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount955168668.mount: Deactivated successfully. Feb 12 19:24:13.419379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3051832577.mount: Deactivated successfully. Feb 12 19:24:13.430257 env[1381]: time="2024-02-12T19:24:13.430201181Z" level=info msg="CreateContainer within sandbox \"37b8bcb0855476dd5196596724384adee91c7790990003c2d7359109845d080d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"68b31bd2769f9ea3cceb05ff53f29706ba4d8a212b355a6edc5a899c212f57e9\"" Feb 12 19:24:13.430814 env[1381]: time="2024-02-12T19:24:13.430790390Z" level=info msg="StartContainer for \"68b31bd2769f9ea3cceb05ff53f29706ba4d8a212b355a6edc5a899c212f57e9\"" Feb 12 19:24:13.451723 systemd[1]: Started cri-containerd-68b31bd2769f9ea3cceb05ff53f29706ba4d8a212b355a6edc5a899c212f57e9.scope. Feb 12 19:24:13.485307 systemd[1]: cri-containerd-68b31bd2769f9ea3cceb05ff53f29706ba4d8a212b355a6edc5a899c212f57e9.scope: Deactivated successfully. 
Feb 12 19:24:13.488421 env[1381]: time="2024-02-12T19:24:13.488386939Z" level=info msg="StartContainer for \"68b31bd2769f9ea3cceb05ff53f29706ba4d8a212b355a6edc5a899c212f57e9\" returns successfully" Feb 12 19:24:13.519850 env[1381]: time="2024-02-12T19:24:13.519806413Z" level=info msg="shim disconnected" id=68b31bd2769f9ea3cceb05ff53f29706ba4d8a212b355a6edc5a899c212f57e9 Feb 12 19:24:13.520136 env[1381]: time="2024-02-12T19:24:13.520118156Z" level=warning msg="cleaning up after shim disconnected" id=68b31bd2769f9ea3cceb05ff53f29706ba4d8a212b355a6edc5a899c212f57e9 namespace=k8s.io Feb 12 19:24:13.520216 env[1381]: time="2024-02-12T19:24:13.520202912Z" level=info msg="cleaning up dead shim" Feb 12 19:24:13.531092 env[1381]: time="2024-02-12T19:24:13.530695509Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4528 runtime=io.containerd.runc.v2\n" Feb 12 19:24:14.393979 env[1381]: time="2024-02-12T19:24:14.393928031Z" level=info msg="CreateContainer within sandbox \"37b8bcb0855476dd5196596724384adee91c7790990003c2d7359109845d080d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 19:24:14.460543 env[1381]: time="2024-02-12T19:24:14.460461043Z" level=info msg="CreateContainer within sandbox \"37b8bcb0855476dd5196596724384adee91c7790990003c2d7359109845d080d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d43bc4b41e3c9179b7ee4a282b74535bd49d1ff88ad3da93b002bc61685714eb\"" Feb 12 19:24:14.461094 env[1381]: time="2024-02-12T19:24:14.461062091Z" level=info msg="StartContainer for \"d43bc4b41e3c9179b7ee4a282b74535bd49d1ff88ad3da93b002bc61685714eb\"" Feb 12 19:24:14.481812 systemd[1]: Started cri-containerd-d43bc4b41e3c9179b7ee4a282b74535bd49d1ff88ad3da93b002bc61685714eb.scope. Feb 12 19:24:14.507716 systemd[1]: cri-containerd-d43bc4b41e3c9179b7ee4a282b74535bd49d1ff88ad3da93b002bc61685714eb.scope: Deactivated successfully. 
Feb 12 19:24:14.510118 env[1381]: time="2024-02-12T19:24:14.510081147Z" level=info msg="StartContainer for \"d43bc4b41e3c9179b7ee4a282b74535bd49d1ff88ad3da93b002bc61685714eb\" returns successfully" Feb 12 19:24:14.544283 env[1381]: time="2024-02-12T19:24:14.544234946Z" level=info msg="shim disconnected" id=d43bc4b41e3c9179b7ee4a282b74535bd49d1ff88ad3da93b002bc61685714eb Feb 12 19:24:14.544573 env[1381]: time="2024-02-12T19:24:14.544551729Z" level=warning msg="cleaning up after shim disconnected" id=d43bc4b41e3c9179b7ee4a282b74535bd49d1ff88ad3da93b002bc61685714eb namespace=k8s.io Feb 12 19:24:14.544663 env[1381]: time="2024-02-12T19:24:14.544649284Z" level=info msg="cleaning up dead shim" Feb 12 19:24:14.551132 env[1381]: time="2024-02-12T19:24:14.551074665Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4584 runtime=io.containerd.runc.v2\n" Feb 12 19:24:14.907058 kubelet[2471]: E0212 19:24:14.907006 2471 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-5d78c9869d-rqtn4" podUID=424959a4-582f-4261-b5c8-af3cfc085d21 Feb 12 19:24:15.271277 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d43bc4b41e3c9179b7ee4a282b74535bd49d1ff88ad3da93b002bc61685714eb-rootfs.mount: Deactivated successfully. Feb 12 19:24:15.400114 env[1381]: time="2024-02-12T19:24:15.400063185Z" level=info msg="CreateContainer within sandbox \"37b8bcb0855476dd5196596724384adee91c7790990003c2d7359109845d080d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 19:24:15.435522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount187735837.mount: Deactivated successfully. 
Feb 12 19:24:15.453183 env[1381]: time="2024-02-12T19:24:15.453134955Z" level=info msg="CreateContainer within sandbox \"37b8bcb0855476dd5196596724384adee91c7790990003c2d7359109845d080d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4fbff4b4b05fa6cd564a02a20091b427af463eddfe1481a1e60266bee98db89f\"" Feb 12 19:24:15.454037 env[1381]: time="2024-02-12T19:24:15.454001551Z" level=info msg="StartContainer for \"4fbff4b4b05fa6cd564a02a20091b427af463eddfe1481a1e60266bee98db89f\"" Feb 12 19:24:15.473358 systemd[1]: Started cri-containerd-4fbff4b4b05fa6cd564a02a20091b427af463eddfe1481a1e60266bee98db89f.scope. Feb 12 19:24:15.530432 env[1381]: time="2024-02-12T19:24:15.530315597Z" level=info msg="StartContainer for \"4fbff4b4b05fa6cd564a02a20091b427af463eddfe1481a1e60266bee98db89f\" returns successfully" Feb 12 19:24:15.771238 kubelet[2471]: W0212 19:24:15.771193 2471 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4e058a5_b46c_46eb_928e_f23e82ec2760.slice/cri-containerd-92b7357951989c0e1a3a02b2bf23f8137d266580bd67c50e3fe23a0bc7da3550.scope WatchSource:0}: task 92b7357951989c0e1a3a02b2bf23f8137d266580bd67c50e3fe23a0bc7da3550 not found: not found Feb 12 19:24:15.971497 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Feb 12 19:24:16.416168 kubelet[2471]: I0212 19:24:16.416135 2471 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-x7kk4" podStartSLOduration=5.416087761 podCreationTimestamp="2024-02-12 19:24:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:24:16.415900651 +0000 UTC m=+210.710377555" watchObservedRunningTime="2024-02-12 19:24:16.416087761 +0000 UTC m=+210.710564705" Feb 12 19:24:17.521972 systemd[1]: 
run-containerd-runc-k8s.io-4fbff4b4b05fa6cd564a02a20091b427af463eddfe1481a1e60266bee98db89f-runc.amZFbN.mount: Deactivated successfully. Feb 12 19:24:18.508597 systemd-networkd[1532]: lxc_health: Link UP Feb 12 19:24:18.522104 systemd-networkd[1532]: lxc_health: Gained carrier Feb 12 19:24:18.522535 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 19:24:18.879360 kubelet[2471]: W0212 19:24:18.879316 2471 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4e058a5_b46c_46eb_928e_f23e82ec2760.slice/cri-containerd-d3887462e715bb5e83873685392578aa349abea651a4fc0727592c7620483085.scope WatchSource:0}: task d3887462e715bb5e83873685392578aa349abea651a4fc0727592c7620483085 not found: not found Feb 12 19:24:19.690593 systemd[1]: run-containerd-runc-k8s.io-4fbff4b4b05fa6cd564a02a20091b427af463eddfe1481a1e60266bee98db89f-runc.eCjciy.mount: Deactivated successfully. Feb 12 19:24:20.362665 systemd-networkd[1532]: lxc_health: Gained IPv6LL Feb 12 19:24:21.897156 systemd[1]: run-containerd-runc-k8s.io-4fbff4b4b05fa6cd564a02a20091b427af463eddfe1481a1e60266bee98db89f-runc.hP4sSZ.mount: Deactivated successfully. Feb 12 19:24:21.996810 kubelet[2471]: W0212 19:24:21.996757 2471 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4e058a5_b46c_46eb_928e_f23e82ec2760.slice/cri-containerd-68b31bd2769f9ea3cceb05ff53f29706ba4d8a212b355a6edc5a899c212f57e9.scope WatchSource:0}: task 68b31bd2769f9ea3cceb05ff53f29706ba4d8a212b355a6edc5a899c212f57e9 not found: not found Feb 12 19:24:24.056223 systemd[1]: run-containerd-runc-k8s.io-4fbff4b4b05fa6cd564a02a20091b427af463eddfe1481a1e60266bee98db89f-runc.BVq1qu.mount: Deactivated successfully. Feb 12 19:24:24.177180 sshd[4186]: pam_unix(sshd:session): session closed for user core Feb 12 19:24:24.180001 systemd-logind[1370]: Session 25 logged out. 
Waiting for processes to exit. Feb 12 19:24:24.180670 systemd[1]: sshd@22-10.200.20.4:22-10.200.12.6:49440.service: Deactivated successfully. Feb 12 19:24:24.181390 systemd[1]: session-25.scope: Deactivated successfully. Feb 12 19:24:24.182390 systemd-logind[1370]: Removed session 25. Feb 12 19:24:25.103537 kubelet[2471]: W0212 19:24:25.103498 2471 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4e058a5_b46c_46eb_928e_f23e82ec2760.slice/cri-containerd-d43bc4b41e3c9179b7ee4a282b74535bd49d1ff88ad3da93b002bc61685714eb.scope WatchSource:0}: task d43bc4b41e3c9179b7ee4a282b74535bd49d1ff88ad3da93b002bc61685714eb not found: not found Feb 12 19:24:38.427045 systemd[1]: cri-containerd-3ceabdff883ce64d6725a66af7573b7e369f61b390f3e31dc184a1096f9bb978.scope: Deactivated successfully. Feb 12 19:24:38.427360 systemd[1]: cri-containerd-3ceabdff883ce64d6725a66af7573b7e369f61b390f3e31dc184a1096f9bb978.scope: Consumed 2.577s CPU time. Feb 12 19:24:38.445572 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ceabdff883ce64d6725a66af7573b7e369f61b390f3e31dc184a1096f9bb978-rootfs.mount: Deactivated successfully. 
Feb 12 19:24:38.486284 env[1381]: time="2024-02-12T19:24:38.486234811Z" level=info msg="shim disconnected" id=3ceabdff883ce64d6725a66af7573b7e369f61b390f3e31dc184a1096f9bb978 Feb 12 19:24:38.486284 env[1381]: time="2024-02-12T19:24:38.486281369Z" level=warning msg="cleaning up after shim disconnected" id=3ceabdff883ce64d6725a66af7573b7e369f61b390f3e31dc184a1096f9bb978 namespace=k8s.io Feb 12 19:24:38.486284 env[1381]: time="2024-02-12T19:24:38.486291449Z" level=info msg="cleaning up dead shim" Feb 12 19:24:38.493377 env[1381]: time="2024-02-12T19:24:38.493332208Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5252 runtime=io.containerd.runc.v2\n" Feb 12 19:24:38.859649 kubelet[2471]: E0212 19:24:38.859598 2471 controller.go:193] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.4:54262->10.200.20.26:2379: read: connection timed out" Feb 12 19:24:38.864402 systemd[1]: cri-containerd-54a976b91ea9636d07069319665b164b53955a100f454c63ea3585c5c04912f6.scope: Deactivated successfully. Feb 12 19:24:38.864728 systemd[1]: cri-containerd-54a976b91ea9636d07069319665b164b53955a100f454c63ea3585c5c04912f6.scope: Consumed 1.943s CPU time. Feb 12 19:24:38.880964 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54a976b91ea9636d07069319665b164b53955a100f454c63ea3585c5c04912f6-rootfs.mount: Deactivated successfully. 
Feb 12 19:24:38.898439 env[1381]: time="2024-02-12T19:24:38.898397190Z" level=info msg="shim disconnected" id=54a976b91ea9636d07069319665b164b53955a100f454c63ea3585c5c04912f6 Feb 12 19:24:38.898694 env[1381]: time="2024-02-12T19:24:38.898675220Z" level=warning msg="cleaning up after shim disconnected" id=54a976b91ea9636d07069319665b164b53955a100f454c63ea3585c5c04912f6 namespace=k8s.io Feb 12 19:24:38.898755 env[1381]: time="2024-02-12T19:24:38.898743138Z" level=info msg="cleaning up dead shim" Feb 12 19:24:38.905782 env[1381]: time="2024-02-12T19:24:38.905743179Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:24:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5276 runtime=io.containerd.runc.v2\n" Feb 12 19:24:39.445841 kubelet[2471]: I0212 19:24:39.445817 2471 scope.go:115] "RemoveContainer" containerID="54a976b91ea9636d07069319665b164b53955a100f454c63ea3585c5c04912f6" Feb 12 19:24:39.449287 env[1381]: time="2024-02-12T19:24:39.449245342Z" level=info msg="CreateContainer within sandbox \"1db6740f7ba42644ffc0b48d33bfabc6e56ffa9b23f5fac200d01e2d78e4be07\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 12 19:24:39.449910 kubelet[2471]: I0212 19:24:39.449888 2471 scope.go:115] "RemoveContainer" containerID="3ceabdff883ce64d6725a66af7573b7e369f61b390f3e31dc184a1096f9bb978" Feb 12 19:24:39.452200 env[1381]: time="2024-02-12T19:24:39.452165603Z" level=info msg="CreateContainer within sandbox \"3b69b5284cb84b64ff7309c9ebe843d80585513a0a94f451b590fb34b5bbd2c0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 12 19:24:39.488903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4085873204.mount: Deactivated successfully. Feb 12 19:24:39.493687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3266644002.mount: Deactivated successfully. 
Feb 12 19:24:39.518333 env[1381]: time="2024-02-12T19:24:39.518290303Z" level=info msg="CreateContainer within sandbox \"1db6740f7ba42644ffc0b48d33bfabc6e56ffa9b23f5fac200d01e2d78e4be07\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"f4793e5dcdc69d304532cd38d8eba8488ea006c70fdbec034512efcf0a722433\"" Feb 12 19:24:39.519184 env[1381]: time="2024-02-12T19:24:39.519160594Z" level=info msg="StartContainer for \"f4793e5dcdc69d304532cd38d8eba8488ea006c70fdbec034512efcf0a722433\"" Feb 12 19:24:39.522645 env[1381]: time="2024-02-12T19:24:39.522586399Z" level=info msg="CreateContainer within sandbox \"3b69b5284cb84b64ff7309c9ebe843d80585513a0a94f451b590fb34b5bbd2c0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"66c70515f12da8ad967a3998cf0600aa5b4d1e071954b4e615046910c6525ac4\"" Feb 12 19:24:39.523058 env[1381]: time="2024-02-12T19:24:39.523026144Z" level=info msg="StartContainer for \"66c70515f12da8ad967a3998cf0600aa5b4d1e071954b4e615046910c6525ac4\"" Feb 12 19:24:39.541614 systemd[1]: Started cri-containerd-66c70515f12da8ad967a3998cf0600aa5b4d1e071954b4e615046910c6525ac4.scope. Feb 12 19:24:39.542714 systemd[1]: Started cri-containerd-f4793e5dcdc69d304532cd38d8eba8488ea006c70fdbec034512efcf0a722433.scope. 
Feb 12 19:24:39.595416 env[1381]: time="2024-02-12T19:24:39.595367594Z" level=info msg="StartContainer for \"66c70515f12da8ad967a3998cf0600aa5b4d1e071954b4e615046910c6525ac4\" returns successfully" Feb 12 19:24:39.599314 env[1381]: time="2024-02-12T19:24:39.599258064Z" level=info msg="StartContainer for \"f4793e5dcdc69d304532cd38d8eba8488ea006c70fdbec034512efcf0a722433\" returns successfully" Feb 12 19:24:42.507652 kubelet[2471]: E0212 19:24:42.507540 2471 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-ci-3510.3.2-a-f75f2c89dc.17b3340332372548", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-ci-3510.3.2-a-f75f2c89dc", UID:"222a629b26639d91ba66f1699403f812", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-f75f2c89dc"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 24, 32, 35571016, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 24, 32, 35571016, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.4:54082->10.200.20.26:2379: read: connection timed out' (will not 
retry!) Feb 12 19:24:45.879428 env[1381]: time="2024-02-12T19:24:45.879234093Z" level=info msg="StopPodSandbox for \"697da070cce4885e1ed2c652df137f860f10bf3d893c1bea17d372866d7f10e6\"" Feb 12 19:24:45.879428 env[1381]: time="2024-02-12T19:24:45.879325530Z" level=info msg="TearDown network for sandbox \"697da070cce4885e1ed2c652df137f860f10bf3d893c1bea17d372866d7f10e6\" successfully" Feb 12 19:24:45.879428 env[1381]: time="2024-02-12T19:24:45.879357249Z" level=info msg="StopPodSandbox for \"697da070cce4885e1ed2c652df137f860f10bf3d893c1bea17d372866d7f10e6\" returns successfully" Feb 12 19:24:45.880863 env[1381]: time="2024-02-12T19:24:45.880827165Z" level=info msg="RemovePodSandbox for \"697da070cce4885e1ed2c652df137f860f10bf3d893c1bea17d372866d7f10e6\"" Feb 12 19:24:45.880948 env[1381]: time="2024-02-12T19:24:45.880852764Z" level=info msg="Forcibly stopping sandbox \"697da070cce4885e1ed2c652df137f860f10bf3d893c1bea17d372866d7f10e6\"" Feb 12 19:24:45.880948 env[1381]: time="2024-02-12T19:24:45.880914202Z" level=info msg="TearDown network for sandbox \"697da070cce4885e1ed2c652df137f860f10bf3d893c1bea17d372866d7f10e6\" successfully" Feb 12 19:24:45.899818 env[1381]: time="2024-02-12T19:24:45.899746877Z" level=info msg="RemovePodSandbox \"697da070cce4885e1ed2c652df137f860f10bf3d893c1bea17d372866d7f10e6\" returns successfully" Feb 12 19:24:45.900444 env[1381]: time="2024-02-12T19:24:45.900265982Z" level=info msg="StopPodSandbox for \"59bdebc8fcd0c9915a781b0d7c7beb2d8033e4a63e105fe405293e9391873389\"" Feb 12 19:24:45.900444 env[1381]: time="2024-02-12T19:24:45.900349739Z" level=info msg="TearDown network for sandbox \"59bdebc8fcd0c9915a781b0d7c7beb2d8033e4a63e105fe405293e9391873389\" successfully" Feb 12 19:24:45.900444 env[1381]: time="2024-02-12T19:24:45.900381178Z" level=info msg="StopPodSandbox for \"59bdebc8fcd0c9915a781b0d7c7beb2d8033e4a63e105fe405293e9391873389\" returns successfully" Feb 12 19:24:45.902041 env[1381]: time="2024-02-12T19:24:45.900949681Z" level=info 
msg="RemovePodSandbox for \"59bdebc8fcd0c9915a781b0d7c7beb2d8033e4a63e105fe405293e9391873389\"" Feb 12 19:24:45.902041 env[1381]: time="2024-02-12T19:24:45.900977400Z" level=info msg="Forcibly stopping sandbox \"59bdebc8fcd0c9915a781b0d7c7beb2d8033e4a63e105fe405293e9391873389\"" Feb 12 19:24:45.902041 env[1381]: time="2024-02-12T19:24:45.901037239Z" level=info msg="TearDown network for sandbox \"59bdebc8fcd0c9915a781b0d7c7beb2d8033e4a63e105fe405293e9391873389\" successfully" Feb 12 19:24:45.920997 env[1381]: time="2024-02-12T19:24:45.920949801Z" level=info msg="RemovePodSandbox \"59bdebc8fcd0c9915a781b0d7c7beb2d8033e4a63e105fe405293e9391873389\" returns successfully" Feb 12 19:24:45.921531 env[1381]: time="2024-02-12T19:24:45.921510104Z" level=info msg="StopPodSandbox for \"449e14fe81d911b30578f6d55267a743fe982ad04ffc51ab481bdb15cf1e8ffc\"" Feb 12 19:24:45.921730 env[1381]: time="2024-02-12T19:24:45.921678579Z" level=info msg="TearDown network for sandbox \"449e14fe81d911b30578f6d55267a743fe982ad04ffc51ab481bdb15cf1e8ffc\" successfully" Feb 12 19:24:45.921808 env[1381]: time="2024-02-12T19:24:45.921792016Z" level=info msg="StopPodSandbox for \"449e14fe81d911b30578f6d55267a743fe982ad04ffc51ab481bdb15cf1e8ffc\" returns successfully" Feb 12 19:24:45.922117 env[1381]: time="2024-02-12T19:24:45.922088087Z" level=info msg="RemovePodSandbox for \"449e14fe81d911b30578f6d55267a743fe982ad04ffc51ab481bdb15cf1e8ffc\"" Feb 12 19:24:45.922190 env[1381]: time="2024-02-12T19:24:45.922116766Z" level=info msg="Forcibly stopping sandbox \"449e14fe81d911b30578f6d55267a743fe982ad04ffc51ab481bdb15cf1e8ffc\"" Feb 12 19:24:45.922190 env[1381]: time="2024-02-12T19:24:45.922174404Z" level=info msg="TearDown network for sandbox \"449e14fe81d911b30578f6d55267a743fe982ad04ffc51ab481bdb15cf1e8ffc\" successfully" Feb 12 19:24:45.941136 env[1381]: time="2024-02-12T19:24:45.941088757Z" level=info msg="RemovePodSandbox \"449e14fe81d911b30578f6d55267a743fe982ad04ffc51ab481bdb15cf1e8ffc\" returns 
successfully" Feb 12 19:24:48.860843 kubelet[2471]: E0212 19:24:48.860800 2471 controller.go:193] "Failed to update lease" err="Put \"https://10.200.20.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-f75f2c89dc?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"