Feb 9 19:24:42.173259 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024 Feb 9 19:24:42.173340 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:24:42.173358 kernel: BIOS-provided physical RAM map: Feb 9 19:24:42.173372 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Feb 9 19:24:42.173385 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Feb 9 19:24:42.173398 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Feb 9 19:24:42.173419 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Feb 9 19:24:42.173433 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Feb 9 19:24:42.173456 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Feb 9 19:24:42.173469 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Feb 9 19:24:42.173483 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Feb 9 19:24:42.173496 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Feb 9 19:24:42.173510 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Feb 9 19:24:42.173524 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Feb 9 19:24:42.173546 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Feb 9 19:24:42.173562 kernel: NX (Execute Disable) protection: active Feb 9 19:24:42.173577 kernel: efi: EFI v2.70 by EDK II Feb 9 19:24:42.173593 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbe379198 RNG=0xbfb73018 TPMEventLog=0xbe2bd018 Feb 9 19:24:42.173608 kernel: random: crng init done Feb 9 19:24:42.173623 kernel: SMBIOS 2.4 present. 
Feb 9 19:24:42.173636 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 11/17/2023 Feb 9 19:24:42.173649 kernel: Hypervisor detected: KVM Feb 9 19:24:42.173667 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 9 19:24:42.173681 kernel: kvm-clock: cpu 0, msr 130faa001, primary cpu clock Feb 9 19:24:42.173696 kernel: kvm-clock: using sched offset of 13761274254 cycles Feb 9 19:24:42.173713 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 9 19:24:42.173729 kernel: tsc: Detected 2299.998 MHz processor Feb 9 19:24:42.173744 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 9 19:24:42.173760 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 9 19:24:42.173775 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Feb 9 19:24:42.173791 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 9 19:24:42.173807 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Feb 9 19:24:42.173826 kernel: Using GB pages for direct mapping Feb 9 19:24:42.173841 kernel: Secure boot disabled Feb 9 19:24:42.173856 kernel: ACPI: Early table checksum verification disabled Feb 9 19:24:42.173871 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Feb 9 19:24:42.173887 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Feb 9 19:24:42.173903 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Feb 9 19:24:42.173918 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Feb 9 19:24:42.173934 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Feb 9 19:24:42.173976 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20211217) Feb 9 19:24:42.173997 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Feb 9 19:24:42.174012 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Feb 9 19:24:42.174027 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Feb 9 19:24:42.174044 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Feb 9 19:24:42.174061 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Feb 9 19:24:42.174081 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Feb 9 19:24:42.174098 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Feb 9 19:24:42.174115 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Feb 9 19:24:42.174131 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Feb 9 19:24:42.174148 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Feb 9 19:24:42.174164 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Feb 9 19:24:42.174401 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Feb 9 19:24:42.174419 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Feb 9 19:24:42.174436 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Feb 9 19:24:42.174458 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 9 19:24:42.174613 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 9 19:24:42.174631 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Feb 9 19:24:42.174647 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Feb 9 19:24:42.174664 kernel: ACPI: SRAT: Node 0 PXM 0 
[mem 0x100000000-0x21fffffff] Feb 9 19:24:42.174680 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Feb 9 19:24:42.174826 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Feb 9 19:24:42.174844 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Feb 9 19:24:42.174861 kernel: Zone ranges: Feb 9 19:24:42.174882 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 9 19:24:42.174899 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Feb 9 19:24:42.175053 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Feb 9 19:24:42.175070 kernel: Movable zone start for each node Feb 9 19:24:42.175087 kernel: Early memory node ranges Feb 9 19:24:42.175103 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Feb 9 19:24:42.175198 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Feb 9 19:24:42.175216 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Feb 9 19:24:42.175232 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Feb 9 19:24:42.175254 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Feb 9 19:24:42.175271 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Feb 9 19:24:42.180149 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 9 19:24:42.180181 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Feb 9 19:24:42.180200 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Feb 9 19:24:42.180217 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Feb 9 19:24:42.180233 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Feb 9 19:24:42.180249 kernel: ACPI: PM-Timer IO Port: 0xb008 Feb 9 19:24:42.180266 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 9 19:24:42.180301 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 9 19:24:42.180325 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 9 19:24:42.180341 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 9 19:24:42.180355 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 9 19:24:42.180371 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 9 19:24:42.180385 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 9 19:24:42.180401 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 9 19:24:42.180416 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Feb 9 19:24:42.180432 kernel: Booting paravirtualized kernel on KVM Feb 9 19:24:42.180452 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 9 19:24:42.180468 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Feb 9 19:24:42.180485 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576 Feb 9 19:24:42.180501 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152 Feb 9 19:24:42.180517 kernel: pcpu-alloc: [0] 0 1 Feb 9 19:24:42.180533 kernel: kvm-guest: PV spinlocks enabled Feb 9 19:24:42.180549 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 9 19:24:42.180565 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1931256 Feb 9 19:24:42.180581 kernel: Policy zone: Normal Feb 9 19:24:42.180604 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:24:42.180620 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 9 19:24:42.180636 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Feb 9 19:24:42.180651 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 9 19:24:42.180667 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 9 19:24:42.180683 kernel: Memory: 7536516K/7860584K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 323808K reserved, 0K cma-reserved) Feb 9 19:24:42.180700 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 9 19:24:42.180716 kernel: Kernel/User page tables isolation: enabled Feb 9 19:24:42.180736 kernel: ftrace: allocating 34475 entries in 135 pages Feb 9 19:24:42.180752 kernel: ftrace: allocated 135 pages with 4 groups Feb 9 19:24:42.180768 kernel: rcu: Hierarchical RCU implementation. Feb 9 19:24:42.180786 kernel: rcu: RCU event tracing is enabled. Feb 9 19:24:42.180802 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 9 19:24:42.180818 kernel: Rude variant of Tasks RCU enabled. Feb 9 19:24:42.180834 kernel: Tracing variant of Tasks RCU enabled. Feb 9 19:24:42.180851 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 9 19:24:42.180867 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 9 19:24:42.180886 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Feb 9 19:24:42.180914 kernel: Console: colour dummy device 80x25 Feb 9 19:24:42.180931 kernel: printk: console [ttyS0] enabled Feb 9 19:24:42.180967 kernel: ACPI: Core revision 20210730 Feb 9 19:24:42.180984 kernel: APIC: Switch to symmetric I/O mode setup Feb 9 19:24:42.181000 kernel: x2apic enabled Feb 9 19:24:42.181017 kernel: Switched APIC routing to physical x2apic. Feb 9 19:24:42.181034 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Feb 9 19:24:42.181052 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Feb 9 19:24:42.181071 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Feb 9 19:24:42.181093 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Feb 9 19:24:42.181110 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Feb 9 19:24:42.181126 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 9 19:24:42.181142 kernel: Spectre V2 : Mitigation: IBRS Feb 9 19:24:42.181158 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 9 19:24:42.181175 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 9 19:24:42.181195 kernel: RETBleed: Mitigation: IBRS Feb 9 19:24:42.181213 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 9 19:24:42.181230 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl Feb 9 19:24:42.181247 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Feb 9 19:24:42.181263 kernel: MDS: Mitigation: Clear CPU buffers Feb 9 19:24:42.181280 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 9 19:24:42.181359 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 9 19:24:42.181377 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 9 19:24:42.181393 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 9 19:24:42.181415 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 9 19:24:42.181433 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Feb 9 19:24:42.181449 kernel: Freeing SMP alternatives memory: 32K Feb 9 19:24:42.181464 kernel: pid_max: default: 32768 minimum: 301 Feb 9 19:24:42.181481 kernel: LSM: Security Framework initializing Feb 9 19:24:42.181498 kernel: SELinux: Initializing. Feb 9 19:24:42.181515 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 9 19:24:42.181532 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 9 19:24:42.181550 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Feb 9 19:24:42.181571 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Feb 9 19:24:42.181587 kernel: signal: max sigframe size: 1776 Feb 9 19:24:42.181605 kernel: rcu: Hierarchical SRCU implementation. Feb 9 19:24:42.181622 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 9 19:24:42.181639 kernel: smp: Bringing up secondary CPUs ... Feb 9 19:24:42.181656 kernel: x86: Booting SMP configuration: Feb 9 19:24:42.181673 kernel: .... node #0, CPUs: #1 Feb 9 19:24:42.181690 kernel: kvm-clock: cpu 1, msr 130faa041, secondary cpu clock Feb 9 19:24:42.181707 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Feb 9 19:24:42.181730 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Feb 9 19:24:42.181747 kernel: smp: Brought up 1 node, 2 CPUs Feb 9 19:24:42.181764 kernel: smpboot: Max logical packages: 1 Feb 9 19:24:42.181781 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Feb 9 19:24:42.181799 kernel: devtmpfs: initialized Feb 9 19:24:42.181815 kernel: x86/mm: Memory block size: 128MB Feb 9 19:24:42.181833 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Feb 9 19:24:42.181851 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 9 19:24:42.181868 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 9 19:24:42.181907 kernel: pinctrl core: initialized pinctrl subsystem Feb 9 19:24:42.181924 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 9 19:24:42.181938 kernel: audit: initializing netlink subsys (disabled) Feb 9 19:24:42.181961 kernel: audit: type=2000 audit(1707506680.693:1): state=initialized audit_enabled=0 res=1 Feb 9 19:24:42.181977 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 9 19:24:42.181994 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 9 19:24:42.182011 kernel: cpuidle: using governor menu Feb 9 19:24:42.182025 kernel: ACPI: bus type PCI registered Feb 9 19:24:42.182042 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 9 19:24:42.182062 kernel: dca service started, version 1.12.1 Feb 9 19:24:42.182080 kernel: PCI: Using configuration type 1 for base access Feb 9 19:24:42.182097 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Feb 9 19:24:42.182114 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 9 19:24:42.182130 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 9 19:24:42.182147 kernel: ACPI: Added _OSI(Module Device) Feb 9 19:24:42.182163 kernel: ACPI: Added _OSI(Processor Device) Feb 9 19:24:42.182179 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 9 19:24:42.182195 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 9 19:24:42.182216 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 9 19:24:42.182232 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 9 19:24:42.182248 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 9 19:24:42.182264 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Feb 9 19:24:42.182280 kernel: ACPI: Interpreter enabled Feb 9 19:24:42.182310 kernel: ACPI: PM: (supports S0 S3 S5) Feb 9 19:24:42.182327 kernel: ACPI: Using IOAPIC for interrupt routing Feb 9 19:24:42.182343 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 9 19:24:42.182360 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Feb 9 19:24:42.182381 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 9 19:24:42.182611 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Feb 9 19:24:42.182778 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Feb 9 19:24:42.182802 kernel: PCI host bridge to bus 0000:00 Feb 9 19:24:42.182981 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 9 19:24:42.183133 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 9 19:24:42.183280 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 9 19:24:42.183435 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Feb 9 19:24:42.183776 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 9 19:24:42.184094 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 9 19:24:42.184407 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Feb 9 19:24:42.184792 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Feb 9 19:24:42.184960 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Feb 9 19:24:42.185132 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Feb 9 19:24:42.191278 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Feb 9 19:24:42.191519 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Feb 9 19:24:42.191705 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Feb 9 19:24:42.191885 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Feb 9 19:24:42.192059 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Feb 9 19:24:42.192235 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Feb 9 19:24:42.192430 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Feb 9 19:24:42.192594 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Feb 9 19:24:42.192617 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 9 19:24:42.192636 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 9 19:24:42.192653 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 9 19:24:42.192670 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 9 19:24:42.192688 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 9 19:24:42.192710 kernel: iommu: Default domain type: Translated Feb 9 19:24:42.192727 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 9 19:24:42.192745 kernel: vgaarb: loaded Feb 9 19:24:42.192762 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 9 19:24:42.192778 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 9 19:24:42.192796 kernel: PTP clock support registered Feb 9 19:24:42.192813 kernel: Registered efivars operations Feb 9 19:24:42.192830 kernel: PCI: Using ACPI for IRQ routing Feb 9 19:24:42.192847 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 9 19:24:42.192868 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Feb 9 19:24:42.192885 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Feb 9 19:24:42.192901 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Feb 9 19:24:42.192917 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Feb 9 19:24:42.192931 kernel: clocksource: Switched to clocksource kvm-clock Feb 9 19:24:42.192948 kernel: VFS: Disk quotas dquot_6.6.0 Feb 9 19:24:42.193003 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 9 19:24:42.193021 kernel: pnp: PnP ACPI init Feb 9 19:24:42.193038 kernel: pnp: PnP ACPI: found 7 devices Feb 9 19:24:42.193060 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 9 19:24:42.193078 kernel: NET: Registered PF_INET protocol family Feb 9 19:24:42.193094 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 9 19:24:42.193111 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Feb 9 19:24:42.193129 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 9 19:24:42.193146 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 9 19:24:42.193164 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Feb 9 19:24:42.193181 kernel: TCP: Hash tables configured (established 65536 bind 65536) Feb 9 19:24:42.193199 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 9 19:24:42.193219 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 9 19:24:42.193237 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 9 19:24:42.193254 kernel: NET: Registered PF_XDP protocol family Feb 9 19:24:42.193668 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 9 19:24:42.194085 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 9 19:24:42.194387 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 9 19:24:42.194652 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Feb 9 19:24:42.194824 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 9 19:24:42.194853 kernel: PCI: CLS 0 bytes, default 64 Feb 9 19:24:42.194873 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 9 19:24:42.194891 kernel: software IO TLB: mapped [mem 0x00000000b7ff7000-0x00000000bbff7000] (64MB) Feb 9 19:24:42.194909 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 9 19:24:42.194927 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Feb 9 19:24:42.194945 kernel: clocksource: Switched to clocksource tsc Feb 9 19:24:42.194969 kernel: Initialise system trusted keyrings Feb 9 19:24:42.194987 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Feb 9 19:24:42.195009 kernel: Key type asymmetric registered Feb 9 19:24:42.195026 kernel: Asymmetric key parser 'x509' registered Feb 9 19:24:42.195043 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 9 19:24:42.195061 kernel: io scheduler mq-deadline registered Feb 9 
19:24:42.195078 kernel: io scheduler kyber registered Feb 9 19:24:42.195095 kernel: io scheduler bfq registered Feb 9 19:24:42.195113 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 9 19:24:42.195131 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Feb 9 19:24:42.203548 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Feb 9 19:24:42.203594 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Feb 9 19:24:42.203790 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Feb 9 19:24:42.203815 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Feb 9 19:24:42.203977 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Feb 9 19:24:42.204001 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 9 19:24:42.204018 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 9 19:24:42.204035 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 9 19:24:42.204052 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Feb 9 19:24:42.204069 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Feb 9 19:24:42.204252 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Feb 9 19:24:42.204276 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 9 19:24:42.204312 kernel: i8042: Warning: Keylock active Feb 9 19:24:42.204328 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 9 19:24:42.204346 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 9 19:24:42.204520 kernel: rtc_cmos 00:00: RTC can wake from S4 Feb 9 19:24:42.204673 kernel: rtc_cmos 00:00: registered as rtc0 Feb 9 19:24:42.204831 kernel: rtc_cmos 00:00: setting system clock to 2024-02-09T19:24:41 UTC (1707506681) Feb 9 19:24:42.204979 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Feb 9 19:24:42.205002 kernel: intel_pstate: CPU model not supported Feb 9 19:24:42.205020 kernel: pstore: Registered efi as persistent store backend Feb 9 19:24:42.205038 kernel: NET: Registered PF_INET6 protocol family Feb 9 19:24:42.205056 kernel: Segment Routing with IPv6 Feb 9 19:24:42.205074 kernel: In-situ OAM (IOAM) with IPv6 Feb 9 19:24:42.205092 kernel: NET: Registered PF_PACKET protocol family Feb 9 19:24:42.205110 kernel: Key type dns_resolver registered Feb 9 19:24:42.205133 kernel: IPI shorthand broadcast: enabled Feb 9 19:24:42.205151 kernel: sched_clock: Marking stable (789221133, 195728004)->(1088206404, -103257267) Feb 9 19:24:42.205169 kernel: registered taskstats version 1 Feb 9 19:24:42.205186 kernel: Loading compiled-in X.509 certificates Feb 9 19:24:42.205204 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 9 19:24:42.205222 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a' Feb 9 19:24:42.205239 kernel: Key type .fscrypt registered Feb 9 19:24:42.205257 kernel: Key type fscrypt-provisioning registered Feb 9 19:24:42.205274 kernel: pstore: Using crash dump compression: deflate Feb 9 19:24:42.205308 kernel: ima: Allocated hash algorithm: sha1 Feb 9 19:24:42.205323 kernel: ima: No architecture policies found Feb 9 19:24:42.205339 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 9 19:24:42.205355 kernel: Write protecting the kernel read-only data: 28672k Feb 9 19:24:42.205371 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 9 19:24:42.205395 kernel: Freeing unused kernel image 
(rodata/data gap) memory: 636K Feb 9 19:24:42.205411 kernel: Run /init as init process Feb 9 19:24:42.205426 kernel: with arguments: Feb 9 19:24:42.205447 kernel: /init Feb 9 19:24:42.205462 kernel: with environment: Feb 9 19:24:42.205477 kernel: HOME=/ Feb 9 19:24:42.205493 kernel: TERM=linux Feb 9 19:24:42.205509 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 19:24:42.205529 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:24:42.205549 systemd[1]: Detected virtualization kvm. Feb 9 19:24:42.205566 systemd[1]: Detected architecture x86-64. Feb 9 19:24:42.205585 systemd[1]: Running in initrd. Feb 9 19:24:42.205602 systemd[1]: No hostname configured, using default hostname. Feb 9 19:24:42.205618 systemd[1]: Hostname set to . Feb 9 19:24:42.205635 systemd[1]: Initializing machine ID from VM UUID. Feb 9 19:24:42.205652 systemd[1]: Queued start job for default target initrd.target. Feb 9 19:24:42.205668 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:24:42.205685 systemd[1]: Reached target cryptsetup.target. Feb 9 19:24:42.205702 systemd[1]: Reached target paths.target. Feb 9 19:24:42.205722 systemd[1]: Reached target slices.target. Feb 9 19:24:42.205739 systemd[1]: Reached target swap.target. Feb 9 19:24:42.205755 systemd[1]: Reached target timers.target. Feb 9 19:24:42.205773 systemd[1]: Listening on iscsid.socket. Feb 9 19:24:42.205790 systemd[1]: Listening on iscsiuio.socket. Feb 9 19:24:42.205807 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 19:24:42.205823 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 19:24:42.205843 systemd[1]: Listening on systemd-journald.socket. Feb 9 19:24:42.205860 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:24:42.205877 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:24:42.205893 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:24:42.205908 systemd[1]: Reached target sockets.target. Feb 9 19:24:42.205925 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:24:42.205942 systemd[1]: Finished network-cleanup.service. Feb 9 19:24:42.205959 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 19:24:42.205976 systemd[1]: Starting systemd-journald.service... Feb 9 19:24:42.205995 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:24:42.206013 systemd[1]: Starting systemd-resolved.service... Feb 9 19:24:42.206030 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 19:24:42.206063 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:24:42.206084 kernel: audit: type=1130 audit(1707506682.174:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:42.206102 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 19:24:42.206120 kernel: audit: type=1130 audit(1707506682.184:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:42.206140 systemd[1]: Finished systemd-vconsole-setup.service. 
Feb 9 19:24:42.206158 kernel: audit: type=1130 audit(1707506682.194:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:42.206175 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 19:24:42.206197 systemd-journald[189]: Journal started Feb 9 19:24:42.207178 systemd-journald[189]: Runtime Journal (/run/log/journal/dbf5c35a97ed51c06342f64257bbe0a4) is 8.0M, max 148.8M, 140.8M free. Feb 9 19:24:42.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:42.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:42.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:42.207725 systemd-modules-load[190]: Inserted module 'overlay' Feb 9 19:24:42.224118 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 19:24:42.224180 systemd[1]: Started systemd-journald.service. Feb 9 19:24:42.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:42.233320 kernel: audit: type=1130 audit(1707506682.225:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:42.236086 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 19:24:42.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:42.248331 kernel: audit: type=1130 audit(1707506682.234:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:42.248235 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 19:24:42.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:42.253783 systemd[1]: Starting dracut-cmdline.service... Feb 9 19:24:42.269474 kernel: audit: type=1130 audit(1707506682.251:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:42.276518 systemd-resolved[191]: Positive Trust Anchors: Feb 9 19:24:42.276944 systemd-resolved[191]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:24:42.277011 systemd-resolved[191]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:24:42.309261 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 19:24:42.309316 kernel: Bridge firewalling registered Feb 9 19:24:42.284356 systemd-resolved[191]: Defaulting to hostname 'linux'. Feb 9 19:24:42.361578 kernel: audit: type=1130 audit(1707506682.316:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:42.361620 kernel: SCSI subsystem initialized Feb 9 19:24:42.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:42.361711 dracut-cmdline[205]: dracut-dracut-053 Feb 9 19:24:42.361711 dracut-cmdline[205]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:24:42.448480 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 19:24:42.448527 kernel: device-mapper: uevent: version 1.0.3 Feb 9 19:24:42.448552 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 19:24:42.448575 kernel: Loading iSCSI transport class v2.0-870. Feb 9 19:24:42.448597 kernel: iscsi: registered transport (tcp) Feb 9 19:24:42.286544 systemd[1]: Started systemd-resolved.service. Feb 9 19:24:42.304147 systemd-modules-load[190]: Inserted module 'br_netfilter' Feb 9 19:24:42.317567 systemd[1]: Reached target nss-lookup.target. Feb 9 19:24:42.486186 kernel: iscsi: registered transport (qla4xxx) Feb 9 19:24:42.486224 kernel: QLogic iSCSI HBA Driver Feb 9 19:24:42.397728 systemd-modules-load[190]: Inserted module 'dm_multipath' Feb 9 19:24:42.531462 kernel: audit: type=1130 audit(1707506682.493:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:42.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:42.398796 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:24:42.568516 kernel: audit: type=1130 audit(1707506682.538:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 19:24:42.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:42.496123 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:24:42.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:42.529758 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:24:42.562046 systemd[1]: Finished dracut-cmdline.service. Feb 9 19:24:42.579408 systemd[1]: Starting dracut-pre-udev.service... Feb 9 19:24:42.643339 kernel: raid6: avx2x4 gen() 18317 MB/s Feb 9 19:24:42.664416 kernel: raid6: avx2x4 xor() 7391 MB/s Feb 9 19:24:42.685379 kernel: raid6: avx2x2 gen() 17992 MB/s Feb 9 19:24:42.706364 kernel: raid6: avx2x2 xor() 18190 MB/s Feb 9 19:24:42.727395 kernel: raid6: avx2x1 gen() 15108 MB/s Feb 9 19:24:42.748397 kernel: raid6: avx2x1 xor() 15624 MB/s Feb 9 19:24:42.769367 kernel: raid6: sse2x4 gen() 11068 MB/s Feb 9 19:24:42.790362 kernel: raid6: sse2x4 xor() 6615 MB/s Feb 9 19:24:42.811336 kernel: raid6: sse2x2 gen() 11124 MB/s Feb 9 19:24:42.832323 kernel: raid6: sse2x2 xor() 7452 MB/s Feb 9 19:24:42.853358 kernel: raid6: sse2x1 gen() 10378 MB/s Feb 9 19:24:42.879970 kernel: raid6: sse2x1 xor() 4967 MB/s Feb 9 19:24:42.880071 kernel: raid6: using algorithm avx2x4 gen() 18317 MB/s Feb 9 19:24:42.880095 kernel: raid6: .... xor() 7391 MB/s, rmw enabled Feb 9 19:24:42.885360 kernel: raid6: using avx2x2 recovery algorithm Feb 9 19:24:42.911338 kernel: xor: automatically using best checksumming function avx Feb 9 19:24:43.022331 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 19:24:43.034625 systemd[1]: Finished dracut-pre-udev.service. Feb 9 19:24:43.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:43.042000 audit: BPF prog-id=7 op=LOAD Feb 9 19:24:43.043000 audit: BPF prog-id=8 op=LOAD Feb 9 19:24:43.045093 systemd[1]: Starting systemd-udevd.service... Feb 9 19:24:43.061760 systemd-udevd[387]: Using default interface naming scheme 'v252'. Feb 9 19:24:43.069530 systemd[1]: Started systemd-udevd.service. Feb 9 19:24:43.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:43.084078 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 19:24:43.100770 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation Feb 9 19:24:43.140797 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 19:24:43.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:43.142310 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:24:43.211616 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:24:43.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:24:43.291320 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 19:24:43.382334 kernel: scsi host0: Virtio SCSI HBA Feb 9 19:24:43.382485 kernel: AVX2 version of gcm_enc/dec engaged. Feb 9 19:24:43.428276 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Feb 9 19:24:43.428468 kernel: AES CTR mode by8 optimization enabled Feb 9 19:24:43.514985 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Feb 9 19:24:43.515371 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Feb 9 19:24:43.515591 kernel: sd 0:0:1:0: [sda] Write Protect is off Feb 9 19:24:43.520159 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Feb 9 19:24:43.520491 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 9 19:24:43.550060 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 9 19:24:43.550148 kernel: GPT:17805311 != 25165823 Feb 9 19:24:43.550170 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 19:24:43.556209 kernel: GPT:17805311 != 25165823 Feb 9 19:24:43.559900 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 19:24:43.565485 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:24:43.573364 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Feb 9 19:24:43.644322 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (438) Feb 9 19:24:43.646095 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 19:24:43.666192 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 19:24:43.671091 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 19:24:43.704460 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 19:24:43.726101 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:24:43.727339 systemd[1]: Starting disk-uuid.service... Feb 9 19:24:43.749889 disk-uuid[514]: Primary Header is updated. Feb 9 19:24:43.749889 disk-uuid[514]: Secondary Entries is updated. Feb 9 19:24:43.749889 disk-uuid[514]: Secondary Header is updated. Feb 9 19:24:43.798496 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:24:43.798538 kernel: GPT:disk_guids don't match. Feb 9 19:24:43.798583 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 19:24:43.798602 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:24:43.808322 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:24:44.799318 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:24:44.799394 disk-uuid[515]: The operation has completed successfully. Feb 9 19:24:44.865001 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 19:24:44.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:44.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:44.865134 systemd[1]: Finished disk-uuid.service. Feb 9 19:24:44.889472 systemd[1]: Starting verity-setup.service... Feb 9 19:24:44.919335 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 9 19:24:45.014824 systemd[1]: Found device dev-mapper-usr.device. Feb 9 19:24:45.029802 systemd[1]: Finished verity-setup.service. 
Feb 9 19:24:45.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:45.031192 systemd[1]: Mounting sysusr-usr.mount... Feb 9 19:24:45.137338 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 19:24:45.138383 systemd[1]: Mounted sysusr-usr.mount. Feb 9 19:24:45.138691 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 19:24:45.187475 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:24:45.187517 kernel: BTRFS info (device sda6): using free space tree Feb 9 19:24:45.187540 kernel: BTRFS info (device sda6): has skinny extents Feb 9 19:24:45.139601 systemd[1]: Starting ignition-setup.service... Feb 9 19:24:45.201456 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 9 19:24:45.152702 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 19:24:45.236256 systemd[1]: Finished ignition-setup.service. Feb 9 19:24:45.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:45.238132 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 19:24:45.271399 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 19:24:45.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:45.286000 audit: BPF prog-id=9 op=LOAD Feb 9 19:24:45.288419 systemd[1]: Starting systemd-networkd.service... Feb 9 19:24:45.323795 systemd-networkd[689]: lo: Link UP Feb 9 19:24:45.323805 systemd-networkd[689]: lo: Gained carrier Feb 9 19:24:45.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:45.324798 systemd-networkd[689]: Enumeration completed Feb 9 19:24:45.324964 systemd[1]: Started systemd-networkd.service. Feb 9 19:24:45.325447 systemd-networkd[689]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:24:45.328003 systemd-networkd[689]: eth0: Link UP Feb 9 19:24:45.328011 systemd-networkd[689]: eth0: Gained carrier Feb 9 19:24:45.338454 systemd-networkd[689]: eth0: DHCPv4 address 10.128.0.66/32, gateway 10.128.0.1 acquired from 169.254.169.254 Feb 9 19:24:45.340710 systemd[1]: Reached target network.target. Feb 9 19:24:45.348819 systemd[1]: Starting iscsiuio.service... Feb 9 19:24:45.429599 systemd[1]: Started iscsiuio.service. Feb 9 19:24:45.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:45.437811 systemd[1]: Starting iscsid.service... Feb 9 19:24:45.450609 iscsid[699]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:24:45.450609 iscsid[699]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. 
If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 9 19:24:45.450609 iscsid[699]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 9 19:24:45.450609 iscsid[699]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 19:24:45.450609 iscsid[699]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 19:24:45.450609 iscsid[699]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:24:45.450609 iscsid[699]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 19:24:45.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:45.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:45.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:45.554980 ignition[663]: Ignition 2.14.0 Feb 9 19:24:45.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:45.458866 systemd[1]: Started iscsid.service. Feb 9 19:24:45.554996 ignition[663]: Stage: fetch-offline Feb 9 19:24:45.473050 systemd[1]: Starting dracut-initqueue.service... Feb 9 19:24:45.555078 ignition[663]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:24:45.510840 systemd[1]: Finished dracut-initqueue.service. Feb 9 19:24:45.555119 ignition[663]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Feb 9 19:24:45.519632 systemd[1]: Reached target remote-fs-pre.target. Feb 9 19:24:45.591031 ignition[663]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 9 19:24:45.548492 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:24:45.591241 ignition[663]: parsed url from cmdline: "" Feb 9 19:24:45.557534 systemd[1]: Reached target remote-fs.target. Feb 9 19:24:45.591246 ignition[663]: no config URL provided Feb 9 19:24:45.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:45.567812 systemd[1]: Starting dracut-pre-mount.service... Feb 9 19:24:45.591254 ignition[663]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:24:45.598330 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 19:24:45.591267 ignition[663]: no config at "/usr/lib/ignition/user.ign" Feb 9 19:24:45.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:45.620931 systemd[1]: Finished dracut-pre-mount.service. 
Feb 9 19:24:45.591277 ignition[663]: failed to fetch config: resource requires networking Feb 9 19:24:45.642077 systemd[1]: Starting ignition-fetch.service... Feb 9 19:24:45.591628 ignition[663]: Ignition finished successfully Feb 9 19:24:45.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:45.730852 unknown[714]: fetched base config from "system" Feb 9 19:24:45.654651 ignition[714]: Ignition 2.14.0 Feb 9 19:24:45.730866 unknown[714]: fetched base config from "system" Feb 9 19:24:45.654662 ignition[714]: Stage: fetch Feb 9 19:24:45.730877 unknown[714]: fetched user config from "gcp" Feb 9 19:24:45.654814 ignition[714]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:24:45.733988 systemd[1]: Finished ignition-fetch.service. Feb 9 19:24:45.654848 ignition[714]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Feb 9 19:24:45.746820 systemd[1]: Starting ignition-kargs.service... Feb 9 19:24:45.663936 ignition[714]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 9 19:24:45.778775 systemd[1]: Finished ignition-kargs.service. Feb 9 19:24:45.664159 ignition[714]: parsed url from cmdline: "" Feb 9 19:24:45.796868 systemd[1]: Starting ignition-disks.service... Feb 9 19:24:45.664167 ignition[714]: no config URL provided Feb 9 19:24:45.822165 systemd[1]: Finished ignition-disks.service. Feb 9 19:24:45.664174 ignition[714]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:24:45.844924 systemd[1]: Reached target initrd-root-device.target. Feb 9 19:24:45.664186 ignition[714]: no config at "/usr/lib/ignition/user.ign" Feb 9 19:24:45.861702 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:24:45.664224 ignition[714]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Feb 9 19:24:45.877613 systemd[1]: Reached target local-fs.target. Feb 9 19:24:45.673911 ignition[714]: GET result: OK Feb 9 19:24:45.890666 systemd[1]: Reached target sysinit.target. Feb 9 19:24:45.674042 ignition[714]: parsing config with SHA512: 0bcf0904e47ff40fb599750b8247be12d1379579c833dac85760b78c7d3b31b158bb92b4fcac90dcfe3b0499a60db044993d9ef21f005415dc6ba612a9707485 Feb 9 19:24:45.890779 systemd[1]: Reached target basic.target. Feb 9 19:24:45.732119 ignition[714]: fetch: fetch complete Feb 9 19:24:45.913094 systemd[1]: Starting systemd-fsck-root.service... 
Feb 9 19:24:45.732126 ignition[714]: fetch: fetch passed Feb 9 19:24:45.732183 ignition[714]: Ignition finished successfully Feb 9 19:24:45.760513 ignition[720]: Ignition 2.14.0 Feb 9 19:24:45.760538 ignition[720]: Stage: kargs Feb 9 19:24:45.760898 ignition[720]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:24:45.760950 ignition[720]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Feb 9 19:24:45.768664 ignition[720]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 9 19:24:45.770357 ignition[720]: kargs: kargs passed Feb 9 19:24:45.770410 ignition[720]: Ignition finished successfully Feb 9 19:24:45.809138 ignition[726]: Ignition 2.14.0 Feb 9 19:24:45.809148 ignition[726]: Stage: disks Feb 9 19:24:45.809271 ignition[726]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:24:45.809471 ignition[726]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Feb 9 19:24:45.817076 ignition[726]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 9 19:24:45.818844 ignition[726]: disks: disks passed Feb 9 19:24:45.818901 ignition[726]: Ignition finished successfully Feb 9 19:24:45.963264 systemd-fsck[734]: ROOT: clean, 602/1628000 files, 124051/1617920 blocks Feb 9 19:24:46.138264 systemd[1]: Finished systemd-fsck-root.service. Feb 9 19:24:46.178732 kernel: kauditd_printk_skb: 22 callbacks suppressed Feb 9 19:24:46.178784 kernel: audit: type=1130 audit(1707506686.137:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:46.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:46.139767 systemd[1]: Mounting sysroot.mount... Feb 9 19:24:46.202548 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 19:24:46.196769 systemd[1]: Mounted sysroot.mount. Feb 9 19:24:46.209848 systemd[1]: Reached target initrd-root-fs.target. Feb 9 19:24:46.229846 systemd[1]: Mounting sysroot-usr.mount... Feb 9 19:24:46.241276 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 9 19:24:46.241386 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 19:24:46.241435 systemd[1]: Reached target ignition-diskful.target. Feb 9 19:24:46.257938 systemd[1]: Mounted sysroot-usr.mount. Feb 9 19:24:46.343621 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (740) Feb 9 19:24:46.343670 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:24:46.343692 kernel: BTRFS info (device sda6): using free space tree Feb 9 19:24:46.343715 kernel: BTRFS info (device sda6): has skinny extents Feb 9 19:24:46.284520 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:24:46.363579 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 9 19:24:46.337528 systemd[1]: Starting initrd-setup-root.service... 
Feb 9 19:24:46.380529 initrd-setup-root[761]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 19:24:46.360995 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 19:24:46.399513 initrd-setup-root[771]: cut: /sysroot/etc/group: No such file or directory Feb 9 19:24:46.409433 initrd-setup-root[779]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 19:24:46.419445 initrd-setup-root[787]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 19:24:46.448339 systemd[1]: Finished initrd-setup-root.service. Feb 9 19:24:46.483505 kernel: audit: type=1130 audit(1707506686.447:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:46.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:46.449815 systemd[1]: Starting ignition-mount.service... Feb 9 19:24:46.491985 systemd[1]: Starting sysroot-boot.service... Feb 9 19:24:46.505724 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 9 19:24:46.505893 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 9 19:24:46.532452 ignition[805]: INFO : Ignition 2.14.0 Feb 9 19:24:46.532452 ignition[805]: INFO : Stage: mount Feb 9 19:24:46.532452 ignition[805]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:24:46.532452 ignition[805]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Feb 9 19:24:46.631475 kernel: audit: type=1130 audit(1707506686.539:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:46.631522 kernel: audit: type=1130 audit(1707506686.589:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:46.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:46.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:46.537701 systemd[1]: Finished sysroot-boot.service. Feb 9 19:24:46.645463 ignition[805]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 9 19:24:46.645463 ignition[805]: INFO : mount: mount passed Feb 9 19:24:46.645463 ignition[805]: INFO : Ignition finished successfully Feb 9 19:24:46.710418 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (816) Feb 9 19:24:46.710456 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:24:46.710471 kernel: BTRFS info (device sda6): using free space tree Feb 9 19:24:46.710486 kernel: BTRFS info (device sda6): has skinny extents Feb 9 19:24:46.710499 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 9 19:24:46.540917 systemd[1]: Finished ignition-mount.service. Feb 9 19:24:46.591990 systemd[1]: Starting ignition-files.service... 
Feb 9 19:24:46.642685 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:24:46.702657 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 19:24:46.764524 ignition[835]: INFO : Ignition 2.14.0 Feb 9 19:24:46.764524 ignition[835]: INFO : Stage: files Feb 9 19:24:46.778453 ignition[835]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:24:46.778453 ignition[835]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Feb 9 19:24:46.778453 ignition[835]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 9 19:24:46.778453 ignition[835]: DEBUG : files: compiled without relabeling support, skipping Feb 9 19:24:46.831469 ignition[835]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 19:24:46.831469 ignition[835]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 19:24:46.831469 ignition[835]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 19:24:46.831469 ignition[835]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 19:24:46.831469 ignition[835]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 19:24:46.831469 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 19:24:46.831469 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 9 19:24:46.794155 unknown[835]: wrote ssh authorized keys file for user: core Feb 9 19:24:47.079673 systemd-networkd[689]: eth0: Gained IPv6LL Feb 9 19:24:47.117248 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 19:24:47.351879 ignition[835]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 9 19:24:47.376499 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 19:24:47.376499 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 19:24:47.376499 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 9 19:24:47.486028 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 19:24:47.597624 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 19:24:47.626449 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (838) Feb 9 19:24:47.624463 systemd[1]: mnt-oem106773153.mount: Deactivated successfully. 
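Each "file matches expected sum of" line above is a SHA512 comparison of a downloaded artifact against the sum carried in the config. A rough Go sketch of that check follows, reusing the cni-plugins sum reported in the log; the local file path is hypothetical.

    package main

    import (
        "crypto/sha512"
        "encoding/hex"
        "fmt"
        "io"
        "os"
    )

    // verify streams a downloaded artifact through SHA-512 and compares the
    // digest with the expected sum from the config.
    func verify(path, expected string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()

        h := sha512.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != expected {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, expected)
        }
        return nil
    }

    func main() {
        // Hypothetical local copy of the tarball fetched in the log above;
        // the expected sum is the one reported there.
        err := verify("cni-plugins-linux-amd64-v1.1.1.tgz",
            "4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("file matches expected sum")
    }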
Feb 9 19:24:47.635480 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/hosts" Feb 9 19:24:47.635480 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:24:47.635480 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(6): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem106773153" Feb 9 19:24:47.635480 ignition[835]: CRITICAL : files: createFilesystemsFiles: createFiles: op(5): op(6): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem106773153": device or resource busy Feb 9 19:24:47.635480 ignition[835]: ERROR : files: createFilesystemsFiles: createFiles: op(5): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem106773153", trying btrfs: device or resource busy Feb 9 19:24:47.635480 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem106773153" Feb 9 19:24:47.635480 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(7): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem106773153" Feb 9 19:24:47.635480 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(8): [started] unmounting "/mnt/oem106773153" Feb 9 19:24:47.635480 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(8): [finished] unmounting "/mnt/oem106773153" Feb 9 19:24:47.635480 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/hosts" Feb 9 19:24:47.635480 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 19:24:47.635480 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 9 19:24:47.836473 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK Feb 9 19:24:47.907446 ignition[835]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 9 19:24:47.931484 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 19:24:47.931484 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 19:24:47.931484 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 19:24:47.931484 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:24:47.931484 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 9 19:24:48.013531 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 9 19:24:48.690036 ignition[835]: DEBUG : files: createFilesystemsFiles: createFiles: op(b): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 9 
19:24:48.714469 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:24:48.714469 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Feb 9 19:24:48.714469 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(c): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:24:48.714469 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2247903881" Feb 9 19:24:48.714469 ignition[835]: CRITICAL : files: createFilesystemsFiles: createFiles: op(c): op(d): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2247903881": device or resource busy Feb 9 19:24:48.714469 ignition[835]: ERROR : files: createFilesystemsFiles: createFiles: op(c): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2247903881", trying btrfs: device or resource busy Feb 9 19:24:48.714469 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(e): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2247903881" Feb 9 19:24:48.714469 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(e): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2247903881" Feb 9 19:24:48.714469 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(f): [started] unmounting "/mnt/oem2247903881" Feb 9 19:24:48.714469 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(f): [finished] unmounting "/mnt/oem2247903881" Feb 9 19:24:48.714469 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Feb 9 19:24:48.714469 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:24:48.714469 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(10): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 9 19:24:48.707732 systemd[1]: mnt-oem2247903881.mount: Deactivated successfully. 
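The mount sequence repeated above, trying /dev/disk/by-label/OEM as ext4, falling back to btrfs on "device or resource busy", writing the file, then unmounting the temporary mount point, can be sketched roughly as below. This only illustrates the fallback pattern, not Ignition's real implementation, and it assumes golang.org/x/sys/unix for the mount syscalls.

    package main

    import (
        "os"

        "golang.org/x/sys/unix"
    )

    // mountOEM mirrors the fallback in the log: try ext4 first and, if that
    // fails (for example with "device or resource busy"), retry as btrfs.
    func mountOEM(device string) (string, error) {
        dir, err := os.MkdirTemp("/mnt", "oem")
        if err != nil {
            return "", err
        }
        if err := unix.Mount(device, dir, "ext4", 0, ""); err != nil {
            if err := unix.Mount(device, dir, "btrfs", 0, ""); err != nil {
                return "", err
            }
        }
        return dir, nil
    }

    func main() {
        dir, err := mountOEM("/dev/disk/by-label/OEM")
        if err != nil {
            panic(err)
        }
        defer unix.Unmount(dir, 0)
        // ...copy the OEM-provided file into /sysroot here, then unmount...
    }

As the op(8)/op(f) entries show, the mount is released again as soon as the single file has been written, which is why a fresh /mnt/oem* directory appears for each file.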
Feb 9 19:24:48.944494 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(10): GET result: OK Feb 9 19:24:48.963606 ignition[835]: DEBUG : files: createFilesystemsFiles: createFiles: op(10): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 9 19:24:48.988480 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:24:48.988480 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 19:24:48.988480 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(11): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1 Feb 9 19:24:48.988480 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(11): GET result: OK Feb 9 19:24:49.218471 ignition[835]: DEBUG : files: createFilesystemsFiles: createFiles: op(11): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628 Feb 9 19:24:49.218471 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 19:24:49.258460 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:24:49.258460 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:24:49.258460 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/home/core/install.sh" Feb 9 19:24:49.258460 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 19:24:49.258460 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 19:24:49.258460 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 19:24:49.258460 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(15): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 19:24:49.258460 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(15): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 19:24:49.258460 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(16): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 19:24:49.258460 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(16): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 19:24:49.258460 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(17): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:24:49.258460 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(17): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:24:49.258460 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(18): [started] writing file "/sysroot/etc/systemd/system/oem-gce.service" Feb 9 19:24:49.258460 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(18): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:24:49.258460 ignition[835]: INFO : 
files: createFilesystemsFiles: createFiles: op(18): op(19): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1472186967" Feb 9 19:24:49.258460 ignition[835]: CRITICAL : files: createFilesystemsFiles: createFiles: op(18): op(19): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1472186967": device or resource busy Feb 9 19:24:49.708489 kernel: audit: type=1130 audit(1707506689.298:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:49.708542 kernel: audit: type=1130 audit(1707506689.416:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:49.708568 kernel: audit: type=1130 audit(1707506689.476:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:49.708592 kernel: audit: type=1131 audit(1707506689.476:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:49.708623 kernel: audit: type=1130 audit(1707506689.593:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:49.708645 kernel: audit: type=1131 audit(1707506689.593:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:49.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:49.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:49.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:49.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:49.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:49.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:49.239334 systemd[1]: mnt-oem1472186967.mount: Deactivated successfully. 
Feb 9 19:24:49.724505 ignition[835]: ERROR : files: createFilesystemsFiles: createFiles: op(18): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1472186967", trying btrfs: device or resource busy Feb 9 19:24:49.724505 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1472186967" Feb 9 19:24:49.724505 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1472186967" Feb 9 19:24:49.724505 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [started] unmounting "/mnt/oem1472186967" Feb 9 19:24:49.724505 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [finished] unmounting "/mnt/oem1472186967" Feb 9 19:24:49.724505 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(18): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service" Feb 9 19:24:49.724505 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(1c): [started] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Feb 9 19:24:49.724505 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(1c): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:24:49.724505 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(1c): op(1d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2985704503" Feb 9 19:24:49.724505 ignition[835]: CRITICAL : files: createFilesystemsFiles: createFiles: op(1c): op(1d): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2985704503": device or resource busy Feb 9 19:24:49.724505 ignition[835]: ERROR : files: createFilesystemsFiles: createFiles: op(1c): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2985704503", trying btrfs: device or resource busy Feb 9 19:24:49.724505 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(1c): op(1e): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2985704503" Feb 9 19:24:49.724505 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(1c): op(1e): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2985704503" Feb 9 19:24:49.724505 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(1c): op(1f): [started] unmounting "/mnt/oem2985704503" Feb 9 19:24:49.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:49.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:49.275934 systemd[1]: Finished ignition-files.service. 
Feb 9 19:24:50.037516 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(1c): op(1f): [finished] unmounting "/mnt/oem2985704503" Feb 9 19:24:50.037516 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(1c): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Feb 9 19:24:50.037516 ignition[835]: INFO : files: op(20): [started] processing unit "oem-gce-enable-oslogin.service" Feb 9 19:24:50.037516 ignition[835]: INFO : files: op(20): [finished] processing unit "oem-gce-enable-oslogin.service" Feb 9 19:24:50.037516 ignition[835]: INFO : files: op(21): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 9 19:24:50.037516 ignition[835]: INFO : files: op(21): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 9 19:24:50.037516 ignition[835]: INFO : files: op(22): [started] processing unit "oem-gce.service" Feb 9 19:24:50.037516 ignition[835]: INFO : files: op(22): [finished] processing unit "oem-gce.service" Feb 9 19:24:50.037516 ignition[835]: INFO : files: op(23): [started] processing unit "containerd.service" Feb 9 19:24:50.037516 ignition[835]: INFO : files: op(23): op(24): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 19:24:50.037516 ignition[835]: INFO : files: op(23): op(24): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 19:24:50.037516 ignition[835]: INFO : files: op(23): [finished] processing unit "containerd.service" Feb 9 19:24:50.037516 ignition[835]: INFO : files: op(25): [started] processing unit "prepare-cni-plugins.service" Feb 9 19:24:50.037516 ignition[835]: INFO : files: op(25): op(26): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:24:50.037516 ignition[835]: INFO : files: op(25): op(26): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:24:50.037516 ignition[835]: INFO : files: op(25): [finished] processing unit "prepare-cni-plugins.service" Feb 9 19:24:50.037516 ignition[835]: INFO : files: op(27): [started] processing unit "prepare-critools.service" Feb 9 19:24:50.037516 ignition[835]: INFO : files: op(27): op(28): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:24:50.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:50.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:50.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:50.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:49.310209 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
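Writing a drop-in such as the 10-use-cgroupfs.conf above amounts to creating the unit's .d directory under the sysroot and writing the file into it. A small sketch follows; the drop-in body is not visible in this log, so the content passed in is only a placeholder.

    package main

    import (
        "os"
        "path/filepath"
    )

    // writeDropIn places a systemd drop-in under the sysroot, matching the
    // path the files stage uses for containerd.service above.
    func writeDropIn(sysroot, unit, name, body string) error {
        dir := filepath.Join(sysroot, "etc/systemd/system", unit+".d")
        if err := os.MkdirAll(dir, 0o755); err != nil {
            return err
        }
        return os.WriteFile(filepath.Join(dir, name), []byte(body), 0o644)
    }

    func main() {
        // The real drop-in content is not shown in the log; placeholder only.
        err := writeDropIn("/sysroot", "containerd.service", "10-use-cgroupfs.conf",
            "# placeholder drop-in contents\n")
        if err != nil {
            panic(err)
        }
    }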
Feb 9 19:24:50.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:50.402597 iscsid[699]: iscsid shutting down. Feb 9 19:24:50.418589 ignition[835]: INFO : files: op(27): op(28): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:24:50.418589 ignition[835]: INFO : files: op(27): [finished] processing unit "prepare-critools.service" Feb 9 19:24:50.418589 ignition[835]: INFO : files: op(29): [started] processing unit "prepare-helm.service" Feb 9 19:24:50.418589 ignition[835]: INFO : files: op(29): op(2a): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 19:24:50.418589 ignition[835]: INFO : files: op(29): op(2a): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 19:24:50.418589 ignition[835]: INFO : files: op(29): [finished] processing unit "prepare-helm.service" Feb 9 19:24:50.418589 ignition[835]: INFO : files: op(2b): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:24:50.418589 ignition[835]: INFO : files: op(2b): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:24:50.418589 ignition[835]: INFO : files: op(2c): [started] setting preset to enabled for "prepare-critools.service" Feb 9 19:24:50.418589 ignition[835]: INFO : files: op(2c): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 19:24:50.418589 ignition[835]: INFO : files: op(2d): [started] setting preset to enabled for "prepare-helm.service" Feb 9 19:24:50.418589 ignition[835]: INFO : files: op(2d): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 19:24:50.418589 ignition[835]: INFO : files: op(2e): [started] setting preset to enabled for "oem-gce-enable-oslogin.service" Feb 9 19:24:50.418589 ignition[835]: INFO : files: op(2e): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service" Feb 9 19:24:50.418589 ignition[835]: INFO : files: op(2f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 19:24:50.418589 ignition[835]: INFO : files: op(2f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 19:24:50.418589 ignition[835]: INFO : files: op(30): [started] setting preset to enabled for "oem-gce.service" Feb 9 19:24:50.418589 ignition[835]: INFO : files: op(30): [finished] setting preset to enabled for "oem-gce.service" Feb 9 19:24:50.418589 ignition[835]: INFO : files: createResultFile: createFiles: op(31): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:24:50.418589 ignition[835]: INFO : files: createResultFile: createFiles: op(31): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:24:50.418589 ignition[835]: INFO : files: files passed Feb 9 19:24:50.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:50.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:24:50.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:50.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:50.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:50.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:50.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:50.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:50.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:50.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:50.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:50.815148 initrd-setup-root-after-ignition[858]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 19:24:50.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:49.335681 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 19:24:50.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:50.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:50.866710 ignition[835]: INFO : Ignition finished successfully Feb 9 19:24:49.336756 systemd[1]: Starting ignition-quench.service... Feb 9 19:24:49.379886 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 19:24:49.418273 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 19:24:49.418517 systemd[1]: Finished ignition-quench.service. Feb 9 19:24:50.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:49.477906 systemd[1]: Reached target ignition-complete.target. 
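The audit records interleaved above (SERVICE_START/SERVICE_STOP) follow a fixed key=value layout, so the unit name and result can be pulled out with a short parser. The sketch below handles only the fields that appear in this log and uses one of the SERVICE_STOP lines above as sample input.

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Sample record copied from the log above (timestamp stripped).
        line := `audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'`

        // Only the unit= and res= fields are extracted here.
        re := regexp.MustCompile(`unit=(\S+).*res=(\w+)`)
        if m := re.FindStringSubmatch(line); m != nil {
            fmt.Printf("unit=%s result=%s\n", m[1], m[2])
        }
    }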
Feb 9 19:24:50.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:50.954000 audit: BPF prog-id=6 op=UNLOAD Feb 9 19:24:49.558519 systemd[1]: Starting initrd-parse-etc.service... Feb 9 19:24:50.979610 ignition[873]: INFO : Ignition 2.14.0 Feb 9 19:24:50.979610 ignition[873]: INFO : Stage: umount Feb 9 19:24:50.979610 ignition[873]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:24:50.979610 ignition[873]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Feb 9 19:24:50.979610 ignition[873]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 9 19:24:50.979610 ignition[873]: INFO : umount: umount passed Feb 9 19:24:50.979610 ignition[873]: INFO : Ignition finished successfully Feb 9 19:24:50.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:51.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:51.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:49.588515 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 19:24:51.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:49.588739 systemd[1]: Finished initrd-parse-etc.service. Feb 9 19:24:49.594832 systemd[1]: Reached target initrd-fs.target. Feb 9 19:24:49.660703 systemd[1]: Reached target initrd.target. Feb 9 19:24:51.177684 kernel: kauditd_printk_skb: 28 callbacks suppressed Feb 9 19:24:51.177720 kernel: audit: type=1131 audit(1707506691.143:71): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:51.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:49.683753 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 19:24:51.218500 kernel: audit: type=1131 audit(1707506691.185:72): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:51.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:49.685044 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 19:24:51.256502 kernel: audit: type=1131 audit(1707506691.226:73): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 19:24:51.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:49.717876 systemd[1]: mnt-oem2985704503.mount: Deactivated successfully. Feb 9 19:24:49.718446 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 19:24:51.310702 kernel: audit: type=1131 audit(1707506691.281:74): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:51.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:49.734723 systemd[1]: Starting initrd-cleanup.service... Feb 9 19:24:51.347551 kernel: audit: type=1131 audit(1707506691.318:75): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:51.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:49.798515 systemd[1]: Stopped target nss-lookup.target. Feb 9 19:24:51.405523 kernel: audit: type=1130 audit(1707506691.354:76): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:51.405569 kernel: audit: type=1131 audit(1707506691.354:77): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:51.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:51.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:49.807886 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 19:24:49.850850 systemd[1]: Stopped target timers.target. Feb 9 19:24:49.891788 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 19:24:51.473928 kernel: audit: type=1334 audit(1707506691.435:78): prog-id=8 op=UNLOAD Feb 9 19:24:51.474001 kernel: audit: type=1334 audit(1707506691.435:79): prog-id=7 op=UNLOAD Feb 9 19:24:51.474028 kernel: audit: type=1334 audit(1707506691.444:80): prog-id=5 op=UNLOAD Feb 9 19:24:51.474052 systemd-journald[189]: Received SIGTERM from PID 1 (n/a). Feb 9 19:24:51.435000 audit: BPF prog-id=8 op=UNLOAD Feb 9 19:24:51.435000 audit: BPF prog-id=7 op=UNLOAD Feb 9 19:24:51.444000 audit: BPF prog-id=5 op=UNLOAD Feb 9 19:24:51.444000 audit: BPF prog-id=4 op=UNLOAD Feb 9 19:24:51.444000 audit: BPF prog-id=3 op=UNLOAD Feb 9 19:24:49.891982 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 19:24:51.494606 systemd-journald[189]: Failed to send stream file descriptor to service manager: Connection refused Feb 9 19:24:49.909114 systemd[1]: Stopped target initrd.target. 
Feb 9 19:24:49.934866 systemd[1]: Stopped target basic.target. Feb 9 19:24:49.962107 systemd[1]: Stopped target ignition-complete.target. Feb 9 19:24:49.992800 systemd[1]: Stopped target ignition-diskful.target. Feb 9 19:24:50.030780 systemd[1]: Stopped target initrd-root-device.target. Feb 9 19:24:50.045921 systemd[1]: Stopped target remote-fs.target. Feb 9 19:24:50.062009 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 19:24:50.094912 systemd[1]: Stopped target sysinit.target. Feb 9 19:24:50.107937 systemd[1]: Stopped target local-fs.target. Feb 9 19:24:50.127871 systemd[1]: Stopped target local-fs-pre.target. Feb 9 19:24:50.163827 systemd[1]: Stopped target swap.target. Feb 9 19:24:50.174763 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 19:24:50.174950 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 19:24:50.192033 systemd[1]: Stopped target cryptsetup.target. Feb 9 19:24:50.208770 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 19:24:50.208985 systemd[1]: Stopped dracut-initqueue.service. Feb 9 19:24:50.234155 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 19:24:50.234376 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 19:24:50.278909 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 19:24:50.279088 systemd[1]: Stopped ignition-files.service. Feb 9 19:24:50.293408 systemd[1]: Stopping ignition-mount.service... Feb 9 19:24:50.351069 systemd[1]: Stopping iscsid.service... Feb 9 19:24:50.362686 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 19:24:50.362912 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 19:24:50.394515 systemd[1]: Stopping sysroot-boot.service... Feb 9 19:24:50.410597 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 19:24:50.410916 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 19:24:50.427758 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 19:24:50.427962 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 19:24:50.458022 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 19:24:50.458894 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 19:24:50.459049 systemd[1]: Stopped iscsid.service. Feb 9 19:24:50.473348 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 19:24:50.473463 systemd[1]: Stopped ignition-mount.service. Feb 9 19:24:50.484266 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 19:24:50.484414 systemd[1]: Stopped sysroot-boot.service. Feb 9 19:24:50.508530 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 19:24:50.508797 systemd[1]: Stopped ignition-disks.service. Feb 9 19:24:50.538605 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 19:24:50.538693 systemd[1]: Stopped ignition-kargs.service. Feb 9 19:24:50.559607 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 9 19:24:50.559691 systemd[1]: Stopped ignition-fetch.service. Feb 9 19:24:50.581614 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 19:24:50.581696 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 19:24:50.603675 systemd[1]: Stopped target paths.target. Feb 9 19:24:50.616678 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 19:24:50.620404 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 19:24:50.644492 systemd[1]: Stopped target slices.target. 
Feb 9 19:24:50.662474 systemd[1]: Stopped target sockets.target. Feb 9 19:24:50.680616 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 19:24:50.680668 systemd[1]: Closed iscsid.socket. Feb 9 19:24:50.712624 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 19:24:50.712705 systemd[1]: Stopped ignition-setup.service. Feb 9 19:24:50.746861 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 19:24:50.746961 systemd[1]: Stopped initrd-setup-root.service. Feb 9 19:24:50.759860 systemd[1]: Stopping iscsiuio.service... Feb 9 19:24:50.799013 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 19:24:50.799136 systemd[1]: Stopped iscsiuio.service. Feb 9 19:24:50.828034 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 19:24:50.828145 systemd[1]: Finished initrd-cleanup.service. Feb 9 19:24:50.850720 systemd[1]: Stopped target network.target. Feb 9 19:24:50.874617 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 19:24:50.874681 systemd[1]: Closed iscsiuio.socket. Feb 9 19:24:50.889719 systemd[1]: Stopping systemd-networkd.service... Feb 9 19:24:50.893380 systemd-networkd[689]: eth0: DHCPv6 lease lost Feb 9 19:24:51.496000 audit: BPF prog-id=9 op=UNLOAD Feb 9 19:24:50.903760 systemd[1]: Stopping systemd-resolved.service... Feb 9 19:24:50.921031 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 19:24:50.921157 systemd[1]: Stopped systemd-resolved.service. Feb 9 19:24:50.939331 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 19:24:50.939468 systemd[1]: Stopped systemd-networkd.service. Feb 9 19:24:50.955630 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 19:24:50.955677 systemd[1]: Closed systemd-networkd.socket. Feb 9 19:24:50.972654 systemd[1]: Stopping network-cleanup.service... Feb 9 19:24:50.986414 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 19:24:50.986634 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 19:24:50.993790 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:24:50.993857 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:24:51.024783 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 19:24:51.024850 systemd[1]: Stopped systemd-modules-load.service. Feb 9 19:24:51.062010 systemd[1]: Stopping systemd-udevd.service... Feb 9 19:24:51.077137 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 19:24:51.077830 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 19:24:51.077980 systemd[1]: Stopped systemd-udevd.service. Feb 9 19:24:51.095894 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 19:24:51.096007 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 19:24:51.111674 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 19:24:51.111757 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 19:24:51.128519 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 19:24:51.128622 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 19:24:51.172109 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 19:24:51.172191 systemd[1]: Stopped dracut-cmdline.service. Feb 9 19:24:51.186596 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 19:24:51.186688 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 19:24:51.229003 systemd[1]: Starting initrd-udevadm-cleanup-db.service... 
Feb 9 19:24:51.264456 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 19:24:51.264586 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 19:24:51.283472 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 19:24:51.283630 systemd[1]: Stopped network-cleanup.service. Feb 9 19:24:51.320423 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 19:24:51.320585 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 19:24:51.355881 systemd[1]: Reached target initrd-switch-root.target. Feb 9 19:24:51.414758 systemd[1]: Starting initrd-switch-root.service... Feb 9 19:24:51.431835 systemd[1]: Switching root. Feb 9 19:24:51.499630 systemd-journald[189]: Journal stopped Feb 9 19:24:56.360943 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 19:24:56.361043 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 19:24:56.361073 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 19:24:56.361096 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 19:24:56.361117 kernel: SELinux: policy capability open_perms=1 Feb 9 19:24:56.361138 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 19:24:56.361171 kernel: SELinux: policy capability always_check_network=0 Feb 9 19:24:56.361192 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 19:24:56.361214 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 19:24:56.361242 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 19:24:56.361264 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 19:24:56.361286 systemd[1]: Successfully loaded SELinux policy in 112.743ms. Feb 9 19:24:56.361359 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.764ms. Feb 9 19:24:56.361396 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:24:56.361424 systemd[1]: Detected virtualization kvm. Feb 9 19:24:56.361448 systemd[1]: Detected architecture x86-64. Feb 9 19:24:56.361472 systemd[1]: Detected first boot. Feb 9 19:24:56.361498 systemd[1]: Initializing machine ID from VM UUID. Feb 9 19:24:56.361521 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 19:24:56.361544 systemd[1]: Populated /etc with preset unit settings. Feb 9 19:24:56.361572 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:24:56.361601 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:24:56.361630 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:24:56.361659 systemd[1]: Queued start job for default target multi-user.target. Feb 9 19:24:56.361683 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 19:24:56.361707 systemd[1]: Created slice system-addon\x2drun.slice. 
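One of the signals behind the "Detected virtualization kvm" line above is the DMI data exposed by the firmware; systemd's detection also uses CPUID and other probes, so the sketch below, which only reads the DMI sysfs fields, is an illustration rather than the real detection path.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Read the DMI identification strings the firmware exposes; on this
        // platform they name the hypervisor/cloud. Fields may be absent
        // elsewhere, so a read failure is simply skipped.
        for _, f := range []string{"sys_vendor", "product_name"} {
            b, err := os.ReadFile("/sys/class/dmi/id/" + f)
            if err != nil {
                continue
            }
            fmt.Printf("%s: %s\n", f, strings.TrimSpace(string(b)))
        }
    }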
Feb 9 19:24:56.361731 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 9 19:24:56.361753 systemd[1]: Created slice system-getty.slice. Feb 9 19:24:56.361776 systemd[1]: Created slice system-modprobe.slice. Feb 9 19:24:56.361799 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 19:24:56.361826 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 19:24:56.361852 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 19:24:56.361875 systemd[1]: Created slice user.slice. Feb 9 19:24:56.361898 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:24:56.361920 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 19:24:56.361943 systemd[1]: Set up automount boot.automount. Feb 9 19:24:56.361966 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 19:24:56.361990 systemd[1]: Reached target integritysetup.target. Feb 9 19:24:56.362015 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:24:56.362041 systemd[1]: Reached target remote-fs.target. Feb 9 19:24:56.362065 systemd[1]: Reached target slices.target. Feb 9 19:24:56.362087 systemd[1]: Reached target swap.target. Feb 9 19:24:56.362110 systemd[1]: Reached target torcx.target. Feb 9 19:24:56.362133 systemd[1]: Reached target veritysetup.target. Feb 9 19:24:56.362156 systemd[1]: Listening on systemd-coredump.socket. Feb 9 19:24:56.362180 systemd[1]: Listening on systemd-initctl.socket. Feb 9 19:24:56.362202 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 19:24:56.362228 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 19:24:56.362251 systemd[1]: Listening on systemd-journald.socket. Feb 9 19:24:56.362275 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:24:56.362309 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:24:56.362333 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:24:56.362355 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 19:24:56.362383 systemd[1]: Mounting dev-hugepages.mount... Feb 9 19:24:56.362407 systemd[1]: Mounting dev-mqueue.mount... Feb 9 19:24:56.362429 systemd[1]: Mounting media.mount... Feb 9 19:24:56.362452 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:24:56.362480 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 19:24:56.362503 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 19:24:56.362529 systemd[1]: Mounting tmp.mount... Feb 9 19:24:56.362553 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 19:24:56.362576 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 19:24:56.362599 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:24:56.362623 systemd[1]: Starting modprobe@configfs.service... Feb 9 19:24:56.362646 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 19:24:56.362670 systemd[1]: Starting modprobe@drm.service... Feb 9 19:24:56.362696 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 19:24:56.362720 systemd[1]: Starting modprobe@fuse.service... Feb 9 19:24:56.362755 systemd[1]: Starting modprobe@loop.service... Feb 9 19:24:56.362781 kernel: fuse: init (API version 7.34) Feb 9 19:24:56.362807 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 19:24:56.362831 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. 
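Each modprobe@<module>.service instance started above loads a single kernel module; done by hand it is just a loop over modprobe, as in the sketch below. The exact flags the template unit passes are not shown in this log, so the invocation here is simplified.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // The same modules the modprobe@ template instances above load.
        for _, mod := range []string{"configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"} {
            if out, err := exec.Command("modprobe", mod).CombinedOutput(); err != nil {
                fmt.Printf("modprobe %s failed: %v (%s)\n", mod, err, out)
            }
        }
    }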
Feb 9 19:24:56.362853 kernel: loop: module loaded Feb 9 19:24:56.362875 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 9 19:24:56.362898 systemd[1]: Starting systemd-journald.service... Feb 9 19:24:56.362920 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:24:56.362943 systemd[1]: Starting systemd-network-generator.service... Feb 9 19:24:56.362965 kernel: kauditd_printk_skb: 16 callbacks suppressed Feb 9 19:24:56.362988 kernel: audit: type=1305 audit(1707506696.357:90): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 19:24:56.363020 systemd-journald[1035]: Journal started Feb 9 19:24:56.363103 systemd-journald[1035]: Runtime Journal (/run/log/journal/dbf5c35a97ed51c06342f64257bbe0a4) is 8.0M, max 148.8M, 140.8M free. Feb 9 19:24:55.900000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:24:55.900000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 19:24:56.357000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 19:24:56.428885 kernel: audit: type=1300 audit(1707506696.357:90): arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffff3894540 a2=4000 a3=7ffff38945dc items=0 ppid=1 pid=1035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:24:56.429016 systemd[1]: Starting systemd-remount-fs.service... Feb 9 19:24:56.429055 kernel: audit: type=1327 audit(1707506696.357:90): proctitle="/usr/lib/systemd/systemd-journald" Feb 9 19:24:56.357000 audit[1035]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffff3894540 a2=4000 a3=7ffff38945dc items=0 ppid=1 pid=1035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:24:56.357000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 19:24:56.454340 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:24:56.475353 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:24:56.484357 systemd[1]: Started systemd-journald.service. Feb 9 19:24:56.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:56.494700 systemd[1]: Mounted dev-hugepages.mount. Feb 9 19:24:56.517348 kernel: audit: type=1130 audit(1707506696.491:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:56.524749 systemd[1]: Mounted dev-mqueue.mount. Feb 9 19:24:56.531541 systemd[1]: Mounted media.mount. Feb 9 19:24:56.538522 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 19:24:56.546749 systemd[1]: Mounted sys-kernel-tracing.mount. 
Feb 9 19:24:56.556855 systemd[1]: Mounted tmp.mount. Feb 9 19:24:56.565150 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 19:24:56.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:56.575118 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:24:56.598617 kernel: audit: type=1130 audit(1707506696.573:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:56.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:56.607089 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 19:24:56.607383 systemd[1]: Finished modprobe@configfs.service. Feb 9 19:24:56.630388 kernel: audit: type=1130 audit(1707506696.605:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:56.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:56.638150 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 19:24:56.638439 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 19:24:56.683142 kernel: audit: type=1130 audit(1707506696.636:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:56.683259 kernel: audit: type=1131 audit(1707506696.636:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:56.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:56.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:56.692015 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 19:24:56.692337 systemd[1]: Finished modprobe@drm.service. Feb 9 19:24:56.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:56.736127 kernel: audit: type=1130 audit(1707506696.690:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:24:56.736379 kernel: audit: type=1131 audit(1707506696.690:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:56.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:56.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:56.744972 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 19:24:56.745222 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 19:24:56.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:56.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:56.753901 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 19:24:56.754196 systemd[1]: Finished modprobe@fuse.service. Feb 9 19:24:56.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:56.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:56.762910 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 19:24:56.763193 systemd[1]: Finished modprobe@loop.service. Feb 9 19:24:56.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:56.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:56.773078 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:24:56.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:56.781932 systemd[1]: Finished systemd-network-generator.service. Feb 9 19:24:56.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:56.791110 systemd[1]: Finished systemd-remount-fs.service. Feb 9 19:24:56.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 9 19:24:56.799993 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:24:56.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:56.809067 systemd[1]: Reached target network-pre.target. Feb 9 19:24:56.819083 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 19:24:56.829190 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 19:24:56.836465 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 19:24:56.839789 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 19:24:56.849658 systemd[1]: Starting systemd-journal-flush.service... Feb 9 19:24:56.858477 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 19:24:56.860579 systemd[1]: Starting systemd-random-seed.service... Feb 9 19:24:56.867778 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 19:24:56.869745 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:24:56.874092 systemd-journald[1035]: Time spent on flushing to /var/log/journal/dbf5c35a97ed51c06342f64257bbe0a4 is 69.996ms for 1138 entries. Feb 9 19:24:56.874092 systemd-journald[1035]: System Journal (/var/log/journal/dbf5c35a97ed51c06342f64257bbe0a4) is 8.0M, max 584.8M, 576.8M free. Feb 9 19:24:56.964243 systemd-journald[1035]: Received client request to flush runtime journal. Feb 9 19:24:56.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:56.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:56.886623 systemd[1]: Starting systemd-sysusers.service... Feb 9 19:24:56.896394 systemd[1]: Starting systemd-udev-settle.service... Feb 9 19:24:56.908174 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 19:24:56.965716 udevadm[1059]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 9 19:24:56.916590 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 19:24:56.925950 systemd[1]: Finished systemd-random-seed.service. Feb 9 19:24:56.938040 systemd[1]: Reached target first-boot-complete.target. Feb 9 19:24:56.949447 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:24:56.964795 systemd[1]: Finished systemd-sysusers.service. Feb 9 19:24:56.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:56.974312 systemd[1]: Finished systemd-journal-flush.service. Feb 9 19:24:56.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:56.985774 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... 
Feb 9 19:24:57.047633 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 19:24:57.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:57.651219 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 19:24:57.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:57.661760 systemd[1]: Starting systemd-udevd.service... Feb 9 19:24:57.687585 systemd-udevd[1070]: Using default interface naming scheme 'v252'. Feb 9 19:24:57.749929 systemd[1]: Started systemd-udevd.service. Feb 9 19:24:57.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:57.761764 systemd[1]: Starting systemd-networkd.service... Feb 9 19:24:57.778912 systemd[1]: Starting systemd-userdbd.service... Feb 9 19:24:57.830977 systemd[1]: Found device dev-ttyS0.device. Feb 9 19:24:57.883864 systemd[1]: Started systemd-userdbd.service. Feb 9 19:24:57.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:57.970324 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 9 19:24:57.999318 kernel: ACPI: button: Power Button [PWRF] Feb 9 19:24:57.953000 audit[1079]: AVC avc: denied { confidentiality } for pid=1079 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 19:24:57.953000 audit[1079]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=562ffd77b5f0 a1=32194 a2=7f05a1c4abc5 a3=5 items=108 ppid=1070 pid=1079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:24:57.953000 audit: CWD cwd="/" Feb 9 19:24:57.953000 audit: PATH item=0 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=1 name=(null) inode=14373 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=2 name=(null) inode=14373 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=3 name=(null) inode=14374 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=4 name=(null) inode=14373 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=5 name=(null) inode=14375 
dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=6 name=(null) inode=14373 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=7 name=(null) inode=14376 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=8 name=(null) inode=14376 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=9 name=(null) inode=14377 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=10 name=(null) inode=14376 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=11 name=(null) inode=14378 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=12 name=(null) inode=14376 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=13 name=(null) inode=14379 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=14 name=(null) inode=14376 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=15 name=(null) inode=14380 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=16 name=(null) inode=14376 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=17 name=(null) inode=14381 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=18 name=(null) inode=14373 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=19 name=(null) inode=14382 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=20 name=(null) inode=14382 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=21 name=(null) inode=14383 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=22 name=(null) inode=14382 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=23 name=(null) inode=14384 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=24 name=(null) inode=14382 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=25 name=(null) inode=14385 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=26 name=(null) inode=14382 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=27 name=(null) inode=14386 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=28 name=(null) inode=14382 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=29 name=(null) inode=14387 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=30 name=(null) inode=14373 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=31 name=(null) inode=14388 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=32 name=(null) inode=14388 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=33 name=(null) inode=14389 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=34 name=(null) inode=14388 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=35 name=(null) inode=14390 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=36 name=(null) inode=14388 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=37 name=(null) inode=14391 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 
audit: PATH item=38 name=(null) inode=14388 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=39 name=(null) inode=14392 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=40 name=(null) inode=14388 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=41 name=(null) inode=14393 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=42 name=(null) inode=14373 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=43 name=(null) inode=14394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=44 name=(null) inode=14394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=45 name=(null) inode=14395 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=46 name=(null) inode=14394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=47 name=(null) inode=14396 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=48 name=(null) inode=14394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=49 name=(null) inode=14397 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=50 name=(null) inode=14394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=51 name=(null) inode=14398 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=52 name=(null) inode=14394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=53 name=(null) inode=14399 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=54 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=55 name=(null) inode=14400 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=56 name=(null) inode=14400 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=57 name=(null) inode=14401 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=58 name=(null) inode=14400 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=59 name=(null) inode=14402 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=60 name=(null) inode=14400 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=61 name=(null) inode=14403 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=62 name=(null) inode=14403 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=63 name=(null) inode=14404 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=64 name=(null) inode=14403 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=65 name=(null) inode=14405 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=66 name=(null) inode=14403 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=67 name=(null) inode=14406 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=68 name=(null) inode=14403 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=69 name=(null) inode=14407 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=70 name=(null) inode=14403 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=71 name=(null) inode=14408 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=72 name=(null) inode=14400 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=73 name=(null) inode=14409 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=74 name=(null) inode=14409 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=75 name=(null) inode=14410 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=76 name=(null) inode=14409 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=77 name=(null) inode=14411 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=78 name=(null) inode=14409 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=79 name=(null) inode=14412 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=80 name=(null) inode=14409 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=81 name=(null) inode=14413 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=82 name=(null) inode=14409 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=83 name=(null) inode=14414 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=84 name=(null) inode=14400 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=85 name=(null) inode=14415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=86 name=(null) inode=14415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=87 name=(null) inode=14416 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=88 name=(null) inode=14415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=89 name=(null) inode=14417 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=90 name=(null) inode=14415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=91 name=(null) inode=14418 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=92 name=(null) inode=14415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=93 name=(null) inode=14419 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=94 name=(null) inode=14415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=95 name=(null) inode=14420 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=96 name=(null) inode=14400 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=97 name=(null) inode=14421 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=98 name=(null) inode=14421 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=99 name=(null) inode=14422 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=100 name=(null) inode=14421 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=101 name=(null) inode=14423 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=102 name=(null) inode=14421 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=103 name=(null) inode=14424 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=104 name=(null) inode=14421 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=105 name=(null) inode=14425 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=106 name=(null) inode=14421 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PATH item=107 name=(null) inode=14426 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:24:57.953000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 19:24:58.064887 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Feb 9 19:24:58.078319 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Feb 9 19:24:58.088344 kernel: EDAC MC: Ver: 3.0.0 Feb 9 19:24:58.103344 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Feb 9 19:24:58.108747 systemd-networkd[1083]: lo: Link UP Feb 9 19:24:58.108759 systemd-networkd[1083]: lo: Gained carrier Feb 9 19:24:58.109517 systemd-networkd[1083]: Enumeration completed Feb 9 19:24:58.109717 systemd[1]: Started systemd-networkd.service. Feb 9 19:24:58.110532 systemd-networkd[1083]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:24:58.121326 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1087) Feb 9 19:24:58.122659 systemd-networkd[1083]: eth0: Link UP Feb 9 19:24:58.122672 systemd-networkd[1083]: eth0: Gained carrier Feb 9 19:24:58.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:58.156334 kernel: ACPI: button: Sleep Button [SLPF] Feb 9 19:24:58.173490 systemd-networkd[1083]: eth0: DHCPv4 address 10.128.0.66/32, gateway 10.128.0.1 acquired from 169.254.169.254 Feb 9 19:24:58.204491 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Feb 9 19:24:58.213314 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 19:24:58.233029 systemd[1]: Finished systemd-udev-settle.service. Feb 9 19:24:58.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:58.243359 systemd[1]: Starting lvm2-activation-early.service... Feb 9 19:24:58.277189 lvm[1108]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:24:58.311879 systemd[1]: Finished lvm2-activation-early.service. Feb 9 19:24:58.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:24:58.320849 systemd[1]: Reached target cryptsetup.target. Feb 9 19:24:58.331119 systemd[1]: Starting lvm2-activation.service... Feb 9 19:24:58.337778 lvm[1110]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:24:58.371052 systemd[1]: Finished lvm2-activation.service. Feb 9 19:24:58.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:58.382034 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:24:58.390532 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 19:24:58.390590 systemd[1]: Reached target local-fs.target. Feb 9 19:24:58.399511 systemd[1]: Reached target machines.target. Feb 9 19:24:58.410273 systemd[1]: Starting ldconfig.service... Feb 9 19:24:58.418576 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 19:24:58.418675 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:24:58.420708 systemd[1]: Starting systemd-boot-update.service... Feb 9 19:24:58.430100 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 19:24:58.442659 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 19:24:58.443087 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:24:58.443199 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:24:58.445238 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 19:24:58.446044 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1113 (bootctl) Feb 9 19:24:58.451011 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 19:24:58.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:58.481969 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 19:24:58.487351 systemd-tmpfiles[1117]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 19:24:58.492878 systemd-tmpfiles[1117]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 19:24:58.498635 systemd-tmpfiles[1117]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 19:24:58.633663 systemd-fsck[1122]: fsck.fat 4.2 (2021-01-31) Feb 9 19:24:58.633663 systemd-fsck[1122]: /dev/sda1: 789 files, 115339/258078 clusters Feb 9 19:24:58.639786 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 19:24:58.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:58.652632 systemd[1]: Mounting boot.mount... Feb 9 19:24:58.682589 systemd[1]: Mounted boot.mount. Feb 9 19:24:58.722326 systemd[1]: Finished systemd-boot-update.service. 
Feb 9 19:24:58.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:58.835406 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 19:24:58.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:58.847577 systemd[1]: Starting audit-rules.service... Feb 9 19:24:58.858467 systemd[1]: Starting clean-ca-certificates.service... Feb 9 19:24:58.869761 systemd[1]: Starting oem-gce-enable-oslogin.service... Feb 9 19:24:58.884140 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 19:24:58.900011 systemd[1]: Starting systemd-resolved.service... Feb 9 19:24:58.911088 systemd[1]: Starting systemd-timesyncd.service... Feb 9 19:24:58.921223 systemd[1]: Starting systemd-update-utmp.service... Feb 9 19:24:58.931672 systemd[1]: Finished clean-ca-certificates.service. Feb 9 19:24:58.936000 audit[1150]: SYSTEM_BOOT pid=1150 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 19:24:58.941219 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Feb 9 19:24:58.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:58.941676 systemd[1]: Finished oem-gce-enable-oslogin.service. Feb 9 19:24:58.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:58.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:58.955949 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 19:24:58.959737 systemd[1]: Finished systemd-update-utmp.service. Feb 9 19:24:58.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:24:58.998178 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 19:24:59.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:24:59.014000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 19:24:59.014000 audit[1162]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff0b49da70 a2=420 a3=0 items=0 ppid=1129 pid=1162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:24:59.014000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 19:24:59.015852 augenrules[1162]: No rules Feb 9 19:24:59.017938 systemd[1]: Finished audit-rules.service. Feb 9 19:24:59.128671 systemd[1]: Started systemd-timesyncd.service. Feb 9 19:24:59.131610 systemd-timesyncd[1147]: Contacted time server 169.254.169.254:123 (169.254.169.254). Feb 9 19:24:59.131693 systemd-timesyncd[1147]: Initial clock synchronization to Fri 2024-02-09 19:24:59.101209 UTC. Feb 9 19:24:59.137807 systemd[1]: Reached target time-set.target. Feb 9 19:24:59.149208 systemd-resolved[1143]: Positive Trust Anchors: Feb 9 19:24:59.149228 systemd-resolved[1143]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:24:59.149311 systemd-resolved[1143]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:24:59.190171 systemd-resolved[1143]: Defaulting to hostname 'linux'. Feb 9 19:24:59.192940 systemd[1]: Started systemd-resolved.service. Feb 9 19:24:59.200552 systemd[1]: Reached target network.target. Feb 9 19:24:59.210438 systemd[1]: Reached target nss-lookup.target. Feb 9 19:24:59.257397 ldconfig[1112]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 19:24:59.435422 systemd[1]: Finished ldconfig.service. Feb 9 19:24:59.445446 systemd[1]: Starting systemd-update-done.service... Feb 9 19:24:59.457795 systemd[1]: Finished systemd-update-done.service. Feb 9 19:24:59.467008 systemd[1]: Reached target sysinit.target. Feb 9 19:24:59.476670 systemd[1]: Started motdgen.path. Feb 9 19:24:59.483696 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 19:24:59.494959 systemd[1]: Started logrotate.timer. Feb 9 19:24:59.502794 systemd[1]: Started mdadm.timer. Feb 9 19:24:59.509582 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 19:24:59.518572 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 19:24:59.518650 systemd[1]: Reached target paths.target. Feb 9 19:24:59.525476 systemd[1]: Reached target timers.target. Feb 9 19:24:59.533172 systemd[1]: Listening on dbus.socket. Feb 9 19:24:59.543541 systemd[1]: Starting docker.socket... Feb 9 19:24:59.553132 systemd[1]: Listening on sshd.socket. Feb 9 19:24:59.560670 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Feb 9 19:24:59.561548 systemd[1]: Listening on docker.socket. Feb 9 19:24:59.569564 systemd[1]: Reached target sockets.target. Feb 9 19:24:59.578468 systemd[1]: Reached target basic.target. Feb 9 19:24:59.585677 systemd[1]: System is tainted: cgroupsv1 Feb 9 19:24:59.585762 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:24:59.585804 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:24:59.587640 systemd[1]: Starting containerd.service... Feb 9 19:24:59.597660 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 9 19:24:59.610544 systemd[1]: Starting dbus.service... Feb 9 19:24:59.619607 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 19:24:59.629588 systemd[1]: Starting extend-filesystems.service... Feb 9 19:24:59.637460 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 19:24:59.639712 systemd[1]: Starting motdgen.service... Feb 9 19:24:59.644158 jq[1181]: false Feb 9 19:24:59.648855 systemd[1]: Starting oem-gce.service... Feb 9 19:24:59.658654 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 19:24:59.668934 systemd[1]: Starting prepare-critools.service... Feb 9 19:24:59.678406 systemd[1]: Starting prepare-helm.service... Feb 9 19:24:59.687480 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 19:24:59.696449 systemd[1]: Starting sshd-keygen.service... Feb 9 19:24:59.707122 systemd[1]: Starting systemd-logind.service... Feb 9 19:24:59.714492 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:24:59.714793 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Feb 9 19:24:59.716968 systemd[1]: Starting update-engine.service... Feb 9 19:24:59.726430 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 19:24:59.737260 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 19:24:59.740207 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 19:24:59.750469 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 19:24:59.750881 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 19:24:59.765971 jq[1209]: true Feb 9 19:24:59.758588 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 19:24:59.759000 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 19:24:59.774997 systemd[1]: Created slice system-sshd.slice. 
Feb 9 19:24:59.796316 mkfs.ext4[1226]: mke2fs 1.46.5 (30-Dec-2021) Feb 9 19:24:59.802373 mkfs.ext4[1226]: Discarding device blocks: done Feb 9 19:24:59.802373 mkfs.ext4[1226]: Creating filesystem with 262144 4k blocks and 65536 inodes Feb 9 19:24:59.802373 mkfs.ext4[1226]: Filesystem UUID: c8857db1-94ae-4392-b475-d465e8945eaf Feb 9 19:24:59.802373 mkfs.ext4[1226]: Superblock backups stored on blocks: Feb 9 19:24:59.802373 mkfs.ext4[1226]: 32768, 98304, 163840, 229376 Feb 9 19:24:59.802373 mkfs.ext4[1226]: Allocating group tables: done Feb 9 19:24:59.802373 mkfs.ext4[1226]: Writing inode tables: done Feb 9 19:24:59.803262 tar[1214]: ./ Feb 9 19:24:59.803704 tar[1214]: ./macvlan Feb 9 19:24:59.811521 mkfs.ext4[1226]: Creating journal (8192 blocks): done Feb 9 19:24:59.813813 jq[1220]: true Feb 9 19:24:59.825801 mkfs.ext4[1226]: Writing superblocks and filesystem accounting information: done Feb 9 19:24:59.849087 extend-filesystems[1182]: Found sda Feb 9 19:24:59.861561 extend-filesystems[1182]: Found sda1 Feb 9 19:24:59.861561 extend-filesystems[1182]: Found sda2 Feb 9 19:24:59.861561 extend-filesystems[1182]: Found sda3 Feb 9 19:24:59.861561 extend-filesystems[1182]: Found usr Feb 9 19:24:59.861561 extend-filesystems[1182]: Found sda4 Feb 9 19:24:59.861561 extend-filesystems[1182]: Found sda6 Feb 9 19:24:59.861561 extend-filesystems[1182]: Found sda7 Feb 9 19:24:59.861561 extend-filesystems[1182]: Found sda9 Feb 9 19:24:59.861561 extend-filesystems[1182]: Checking size of /dev/sda9 Feb 9 19:24:59.881895 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 19:24:59.882277 systemd[1]: Finished motdgen.service. Feb 9 19:24:59.933601 umount[1241]: umount: /var/lib/flatcar-oem-gce.img: not mounted. Feb 9 19:24:59.949326 tar[1215]: crictl Feb 9 19:24:59.950498 update_engine[1202]: I0209 19:24:59.950157 1202 main.cc:92] Flatcar Update Engine starting Feb 9 19:24:59.956345 kernel: loop0: detected capacity change from 0 to 2097152 Feb 9 19:24:59.960639 tar[1218]: linux-amd64/helm Feb 9 19:24:59.961415 dbus-daemon[1180]: [system] SELinux support is enabled Feb 9 19:24:59.962131 systemd[1]: Started dbus.service. Feb 9 19:24:59.966933 dbus-daemon[1180]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1083 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 9 19:24:59.973076 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 19:24:59.973135 systemd[1]: Reached target system-config.target. Feb 9 19:24:59.974055 update_engine[1202]: I0209 19:24:59.973888 1202 update_check_scheduler.cc:74] Next update check in 3m2s Feb 9 19:24:59.977242 extend-filesystems[1182]: Resized partition /dev/sda9 Feb 9 19:24:59.997648 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Feb 9 19:24:59.997722 extend-filesystems[1261]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 19:24:59.981502 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Feb 9 19:25:00.009498 dbus-daemon[1180]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 9 19:24:59.981535 systemd[1]: Reached target user-config.target. Feb 9 19:25:00.009396 systemd[1]: Started update-engine.service. Feb 9 19:25:00.029177 systemd[1]: Started locksmithd.service. Feb 9 19:25:00.046060 systemd[1]: Starting systemd-hostnamed.service... Feb 9 19:25:00.050522 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Feb 9 19:25:00.078391 kernel: EXT4-fs (loop0): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 19:25:00.078549 extend-filesystems[1261]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 9 19:25:00.078549 extend-filesystems[1261]: old_desc_blocks = 1, new_desc_blocks = 2 Feb 9 19:25:00.078549 extend-filesystems[1261]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Feb 9 19:25:00.142091 extend-filesystems[1182]: Resized filesystem in /dev/sda9 Feb 9 19:25:00.155762 bash[1263]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:25:00.155910 coreos-metadata[1179]: Feb 09 19:25:00.140 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Feb 9 19:25:00.155910 coreos-metadata[1179]: Feb 09 19:25:00.144 INFO Fetch failed with 404: resource not found Feb 9 19:25:00.155910 coreos-metadata[1179]: Feb 09 19:25:00.145 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Feb 9 19:25:00.155910 coreos-metadata[1179]: Feb 09 19:25:00.146 INFO Fetch successful Feb 9 19:25:00.155910 coreos-metadata[1179]: Feb 09 19:25:00.146 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Feb 9 19:25:00.155910 coreos-metadata[1179]: Feb 09 19:25:00.147 INFO Fetch failed with 404: resource not found Feb 9 19:25:00.155910 coreos-metadata[1179]: Feb 09 19:25:00.147 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Feb 9 19:25:00.155910 coreos-metadata[1179]: Feb 09 19:25:00.148 INFO Fetch failed with 404: resource not found Feb 9 19:25:00.155910 coreos-metadata[1179]: Feb 09 19:25:00.148 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Feb 9 19:25:00.155910 coreos-metadata[1179]: Feb 09 19:25:00.149 INFO Fetch successful Feb 9 19:25:00.083014 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 19:25:00.083435 systemd[1]: Finished extend-filesystems.service. Feb 9 19:25:00.106400 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 19:25:00.135504 systemd-networkd[1083]: eth0: Gained IPv6LL Feb 9 19:25:00.166712 unknown[1179]: wrote ssh authorized keys file for user: core Feb 9 19:25:00.200321 env[1221]: time="2024-02-09T19:25:00.198693789Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 19:25:00.214812 update-ssh-keys[1275]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:25:00.216093 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 9 19:25:00.307855 systemd-logind[1201]: Watching system buttons on /dev/input/event1 (Power Button) Feb 9 19:25:00.310736 systemd-logind[1201]: Watching system buttons on /dev/input/event3 (Sleep Button) Feb 9 19:25:00.310930 systemd-logind[1201]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 9 19:25:00.317689 systemd-logind[1201]: New seat seat0. 
Feb 9 19:25:00.326486 systemd[1]: Started systemd-logind.service. Feb 9 19:25:00.328149 tar[1214]: ./static Feb 9 19:25:00.420507 env[1221]: time="2024-02-09T19:25:00.420444698Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 19:25:00.420869 env[1221]: time="2024-02-09T19:25:00.420838160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:25:00.447801 env[1221]: time="2024-02-09T19:25:00.447736899Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:25:00.448039 env[1221]: time="2024-02-09T19:25:00.448014849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:25:00.448550 env[1221]: time="2024-02-09T19:25:00.448513249Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:25:00.448703 env[1221]: time="2024-02-09T19:25:00.448681125Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 19:25:00.448834 env[1221]: time="2024-02-09T19:25:00.448811421Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 19:25:00.449313 env[1221]: time="2024-02-09T19:25:00.449205999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 19:25:00.449845 env[1221]: time="2024-02-09T19:25:00.449797540Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:25:00.450529 env[1221]: time="2024-02-09T19:25:00.450447395Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:25:00.451005 env[1221]: time="2024-02-09T19:25:00.450955014Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:25:00.451157 env[1221]: time="2024-02-09T19:25:00.451132468Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 19:25:00.451391 env[1221]: time="2024-02-09T19:25:00.451348509Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 19:25:00.451536 env[1221]: time="2024-02-09T19:25:00.451512625Z" level=info msg="metadata content store policy set" policy=shared Feb 9 19:25:00.464519 env[1221]: time="2024-02-09T19:25:00.464425807Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 19:25:00.464519 env[1221]: time="2024-02-09T19:25:00.464521499Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 19:25:00.464751 env[1221]: time="2024-02-09T19:25:00.464543790Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Feb 9 19:25:00.464751 env[1221]: time="2024-02-09T19:25:00.464620268Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 19:25:00.464751 env[1221]: time="2024-02-09T19:25:00.464645592Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 19:25:00.464751 env[1221]: time="2024-02-09T19:25:00.464726525Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 19:25:00.465187 env[1221]: time="2024-02-09T19:25:00.464749395Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 19:25:00.465187 env[1221]: time="2024-02-09T19:25:00.464995221Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 19:25:00.465187 env[1221]: time="2024-02-09T19:25:00.465045634Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 19:25:00.465187 env[1221]: time="2024-02-09T19:25:00.465075387Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 19:25:00.465187 env[1221]: time="2024-02-09T19:25:00.465099883Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 19:25:00.465187 env[1221]: time="2024-02-09T19:25:00.465124911Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 19:25:00.465517 env[1221]: time="2024-02-09T19:25:00.465366112Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 19:25:00.465517 env[1221]: time="2024-02-09T19:25:00.465505050Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 19:25:00.466210 env[1221]: time="2024-02-09T19:25:00.466177766Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 19:25:00.466333 env[1221]: time="2024-02-09T19:25:00.466235569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 19:25:00.466333 env[1221]: time="2024-02-09T19:25:00.466262594Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 19:25:00.466453 env[1221]: time="2024-02-09T19:25:00.466352277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 19:25:00.466453 env[1221]: time="2024-02-09T19:25:00.466377908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 19:25:00.466453 env[1221]: time="2024-02-09T19:25:00.466399321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 19:25:00.466453 env[1221]: time="2024-02-09T19:25:00.466419355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 19:25:00.466453 env[1221]: time="2024-02-09T19:25:00.466442913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 19:25:00.466689 env[1221]: time="2024-02-09T19:25:00.466466028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Feb 9 19:25:00.466689 env[1221]: time="2024-02-09T19:25:00.466486047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 19:25:00.466689 env[1221]: time="2024-02-09T19:25:00.466507562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 19:25:00.466689 env[1221]: time="2024-02-09T19:25:00.466531460Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 19:25:00.466887 env[1221]: time="2024-02-09T19:25:00.466722943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 19:25:00.466887 env[1221]: time="2024-02-09T19:25:00.466749493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 19:25:00.466887 env[1221]: time="2024-02-09T19:25:00.466772728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 19:25:00.466887 env[1221]: time="2024-02-09T19:25:00.466794217Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 19:25:00.466887 env[1221]: time="2024-02-09T19:25:00.466821116Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 19:25:00.466887 env[1221]: time="2024-02-09T19:25:00.466841115Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 19:25:00.467372 env[1221]: time="2024-02-09T19:25:00.467041892Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 19:25:00.467372 env[1221]: time="2024-02-09T19:25:00.467120038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 19:25:00.483594 env[1221]: time="2024-02-09T19:25:00.483445462Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 19:25:00.487663 env[1221]: time="2024-02-09T19:25:00.483593211Z" level=info msg="Connect containerd service" Feb 9 19:25:00.487663 env[1221]: time="2024-02-09T19:25:00.483660719Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 19:25:00.487663 env[1221]: time="2024-02-09T19:25:00.484845429Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:25:00.487663 env[1221]: time="2024-02-09T19:25:00.485885212Z" level=info msg="Start subscribing containerd event" Feb 9 19:25:00.487663 env[1221]: time="2024-02-09T19:25:00.485968824Z" level=info msg="Start recovering state" Feb 9 19:25:00.489390 env[1221]: time="2024-02-09T19:25:00.489351815Z" level=info msg="Start event monitor" Feb 9 19:25:00.489518 env[1221]: time="2024-02-09T19:25:00.489490759Z" level=info msg="Start snapshots syncer" Feb 9 19:25:00.489614 env[1221]: time="2024-02-09T19:25:00.489523634Z" level=info msg="Start cni network conf syncer for default" Feb 9 19:25:00.489614 env[1221]: time="2024-02-09T19:25:00.489538019Z" level=info msg="Start streaming server" Feb 9 19:25:00.489867 env[1221]: time="2024-02-09T19:25:00.489842253Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Feb 9 19:25:00.489949 env[1221]: time="2024-02-09T19:25:00.489923190Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 19:25:00.490167 systemd[1]: Started containerd.service. Feb 9 19:25:00.491313 env[1221]: time="2024-02-09T19:25:00.490526788Z" level=info msg="containerd successfully booted in 0.308411s" Feb 9 19:25:00.553058 tar[1214]: ./vlan Feb 9 19:25:00.697834 tar[1214]: ./portmap Feb 9 19:25:00.707847 dbus-daemon[1180]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 9 19:25:00.708030 systemd[1]: Started systemd-hostnamed.service. Feb 9 19:25:00.708668 dbus-daemon[1180]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1268 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 9 19:25:00.721440 systemd[1]: Starting polkit.service... Feb 9 19:25:00.810533 polkitd[1285]: Started polkitd version 121 Feb 9 19:25:00.835683 polkitd[1285]: Loading rules from directory /etc/polkit-1/rules.d Feb 9 19:25:00.835973 polkitd[1285]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 9 19:25:00.838479 polkitd[1285]: Finished loading, compiling and executing 2 rules Feb 9 19:25:00.840482 dbus-daemon[1180]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 9 19:25:00.840721 systemd[1]: Started polkit.service. Feb 9 19:25:00.841167 polkitd[1285]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 9 19:25:00.882017 systemd-hostnamed[1268]: Hostname set to (transient) Feb 9 19:25:00.885038 systemd-resolved[1143]: System hostname changed to 'ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal'. Feb 9 19:25:00.886169 tar[1214]: ./host-local Feb 9 19:25:00.979446 tar[1214]: ./vrf Feb 9 19:25:01.088188 tar[1214]: ./bridge Feb 9 19:25:01.220097 tar[1214]: ./tuning Feb 9 19:25:01.348014 tar[1214]: ./firewall Feb 9 19:25:01.521246 tar[1214]: ./host-device Feb 9 19:25:01.649262 tar[1214]: ./sbr Feb 9 19:25:01.781445 tar[1214]: ./loopback Feb 9 19:25:01.897858 tar[1214]: ./dhcp Feb 9 19:25:02.059414 tar[1218]: linux-amd64/LICENSE Feb 9 19:25:02.060114 tar[1218]: linux-amd64/README.md Feb 9 19:25:02.073359 systemd[1]: Finished prepare-helm.service. Feb 9 19:25:02.160207 tar[1214]: ./ptp Feb 9 19:25:02.223655 systemd[1]: Finished prepare-critools.service. Feb 9 19:25:02.293220 tar[1214]: ./ipvlan Feb 9 19:25:02.397779 tar[1214]: ./bandwidth Feb 9 19:25:02.537059 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 19:25:03.504411 sshd_keygen[1227]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 19:25:03.549960 systemd[1]: Finished sshd-keygen.service. Feb 9 19:25:03.560501 locksmithd[1265]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 19:25:03.561343 systemd[1]: Starting issuegen.service... Feb 9 19:25:03.572462 systemd[1]: Started sshd@0-10.128.0.66:22-147.75.109.163:59732.service. Feb 9 19:25:03.583640 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 19:25:03.584263 systemd[1]: Finished issuegen.service. Feb 9 19:25:03.595122 systemd[1]: Starting systemd-user-sessions.service... Feb 9 19:25:03.610281 systemd[1]: Finished systemd-user-sessions.service. Feb 9 19:25:03.621546 systemd[1]: Started getty@tty1.service. Feb 9 19:25:03.632642 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 19:25:03.641868 systemd[1]: Reached target getty.target. 
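containerd came up above with the CRI plugin enabled but warned that /etc/cni/net.d holds no network config yet; the CNI plugin binaries are still being unpacked by the tar[1214] jobs, and the warning clears once a network conflist is installed under /etc/cni/net.d. A quick way to inspect that state over the socket the daemon reports serving on (generic crictl/ctr invocations, not commands taken from this boot):

  crictl --runtime-endpoint unix:///run/containerd/containerd.sock info | head
  ctr --address /run/containerd/containerd.sock plugins ls | grep cri
  ls /etc/cni/net.d        # empty until a CNI conflist is installed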
Feb 9 19:25:03.929275 sshd[1319]: Accepted publickey for core from 147.75.109.163 port 59732 ssh2: RSA SHA256:2enIA9a+Ie+oz8jW4x9GsRBGLqIoWe8fFi/jhwNVhOs Feb 9 19:25:03.932995 sshd[1319]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:25:03.957176 systemd[1]: Created slice user-500.slice. Feb 9 19:25:03.966662 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 19:25:03.979452 systemd-logind[1201]: New session 1 of user core. Feb 9 19:25:03.989124 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 19:25:04.000749 systemd[1]: Starting user@500.service... Feb 9 19:25:04.028085 (systemd)[1329]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:25:04.215481 systemd[1329]: Queued start job for default target default.target. Feb 9 19:25:04.216130 systemd[1329]: Reached target paths.target. Feb 9 19:25:04.216201 systemd[1329]: Reached target sockets.target. Feb 9 19:25:04.216229 systemd[1329]: Reached target timers.target. Feb 9 19:25:04.216249 systemd[1329]: Reached target basic.target. Feb 9 19:25:04.216348 systemd[1329]: Reached target default.target. Feb 9 19:25:04.216399 systemd[1329]: Startup finished in 176ms. Feb 9 19:25:04.216470 systemd[1]: Started user@500.service. Feb 9 19:25:04.225849 systemd[1]: Started session-1.scope. Feb 9 19:25:04.458275 systemd[1]: Started sshd@1-10.128.0.66:22-147.75.109.163:59736.service. Feb 9 19:25:04.764992 sshd[1338]: Accepted publickey for core from 147.75.109.163 port 59736 ssh2: RSA SHA256:2enIA9a+Ie+oz8jW4x9GsRBGLqIoWe8fFi/jhwNVhOs Feb 9 19:25:04.767160 sshd[1338]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:25:04.775370 systemd[1]: Started session-2.scope. Feb 9 19:25:04.776754 systemd-logind[1201]: New session 2 of user core. Feb 9 19:25:04.992764 sshd[1338]: pam_unix(sshd:session): session closed for user core Feb 9 19:25:04.998602 systemd[1]: sshd@1-10.128.0.66:22-147.75.109.163:59736.service: Deactivated successfully. Feb 9 19:25:04.999766 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 19:25:05.002240 systemd-logind[1201]: Session 2 logged out. Waiting for processes to exit. Feb 9 19:25:05.009576 systemd-logind[1201]: Removed session 2. Feb 9 19:25:05.035940 systemd[1]: Started sshd@2-10.128.0.66:22-147.75.109.163:48248.service. Feb 9 19:25:05.344987 sshd[1345]: Accepted publickey for core from 147.75.109.163 port 48248 ssh2: RSA SHA256:2enIA9a+Ie+oz8jW4x9GsRBGLqIoWe8fFi/jhwNVhOs Feb 9 19:25:05.346706 sshd[1345]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:25:05.354545 systemd[1]: Started session-3.scope. Feb 9 19:25:05.354855 systemd-logind[1201]: New session 3 of user core. Feb 9 19:25:05.563602 sshd[1345]: pam_unix(sshd:session): session closed for user core Feb 9 19:25:05.567802 systemd[1]: sshd@2-10.128.0.66:22-147.75.109.163:48248.service: Deactivated successfully. Feb 9 19:25:05.569007 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 19:25:05.571054 systemd-logind[1201]: Session 3 logged out. Waiting for processes to exit. Feb 9 19:25:05.573755 systemd-logind[1201]: Removed session 3. Feb 9 19:25:06.356622 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully. Feb 9 19:25:08.471375 kernel: loop0: detected capacity change from 0 to 2097152 Feb 9 19:25:08.501135 systemd-nspawn[1354]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img. Feb 9 19:25:08.501135 systemd-nspawn[1354]: Press ^] three times within 1s to kill container. 
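sshd-keygen above generated fresh RSA, ECDSA and ED25519 host keys, and the first session was then accepted by publickey for the core user. The SHA256:2enIA... value in the Accepted line is the fingerprint of the client key, which must match one of the entries coreos-metadata wrote into /home/core/.ssh/authorized_keys earlier. A sketch of checking both sides on the host:

  # fingerprints of the freshly generated host keys (what clients pin in known_hosts)
  for k in /etc/ssh/ssh_host_*_key.pub; do ssh-keygen -l -E sha256 -f "$k"; done
  # fingerprints of the authorized client keys for core
  ssh-keygen -l -E sha256 -f /home/core/.ssh/authorized_keys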
Feb 9 19:25:08.519333 kernel: EXT4-fs (loop0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 19:25:08.601875 systemd[1]: Started oem-gce.service. Feb 9 19:25:08.609998 systemd[1]: Reached target multi-user.target. Feb 9 19:25:08.623533 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 19:25:08.642937 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 19:25:08.643392 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 19:25:08.644415 systemd[1]: Startup finished in 11.062s (kernel) + 16.850s (userspace) = 27.913s. Feb 9 19:25:08.696872 systemd-nspawn[1354]: + '[' -e /etc/default/instance_configs.cfg.template ']' Feb 9 19:25:08.697092 systemd-nspawn[1354]: + echo -e '[InstanceSetup]\nset_host_keys = false' Feb 9 19:25:08.697092 systemd-nspawn[1354]: + /usr/bin/google_instance_setup Feb 9 19:25:09.602166 instance-setup[1362]: INFO Running google_set_multiqueue. Feb 9 19:25:09.621596 instance-setup[1362]: INFO Set channels for eth0 to 2. Feb 9 19:25:09.625871 instance-setup[1362]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Feb 9 19:25:09.627538 instance-setup[1362]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Feb 9 19:25:09.628000 instance-setup[1362]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Feb 9 19:25:09.629549 instance-setup[1362]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Feb 9 19:25:09.629918 instance-setup[1362]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Feb 9 19:25:09.631303 instance-setup[1362]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Feb 9 19:25:09.631756 instance-setup[1362]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Feb 9 19:25:09.633195 instance-setup[1362]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Feb 9 19:25:09.646849 instance-setup[1362]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Feb 9 19:25:09.647237 instance-setup[1362]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Feb 9 19:25:09.694071 systemd-nspawn[1354]: + /usr/bin/google_metadata_script_runner --script-type startup Feb 9 19:25:10.061575 startup-script[1393]: INFO Starting startup scripts. Feb 9 19:25:10.076932 startup-script[1393]: INFO No startup scripts found in metadata. Feb 9 19:25:10.077098 startup-script[1393]: INFO Finished running startup scripts. Feb 9 19:25:10.123240 systemd-nspawn[1354]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Feb 9 19:25:10.123240 systemd-nspawn[1354]: + daemon_pids=() Feb 9 19:25:10.123537 systemd-nspawn[1354]: + for d in accounts clock_skew network Feb 9 19:25:10.123634 systemd-nspawn[1354]: + daemon_pids+=($!) Feb 9 19:25:10.123634 systemd-nspawn[1354]: + for d in accounts clock_skew network Feb 9 19:25:10.123892 systemd-nspawn[1354]: + daemon_pids+=($!) Feb 9 19:25:10.123995 systemd-nspawn[1354]: + for d in accounts clock_skew network Feb 9 19:25:10.124258 systemd-nspawn[1354]: + daemon_pids+=($!) 
Feb 9 19:25:10.124497 systemd-nspawn[1354]: + NOTIFY_SOCKET=/run/systemd/notify Feb 9 19:25:10.124497 systemd-nspawn[1354]: + /usr/bin/systemd-notify --ready Feb 9 19:25:10.124615 systemd-nspawn[1354]: + /usr/bin/google_network_daemon Feb 9 19:25:10.124933 systemd-nspawn[1354]: + /usr/bin/google_accounts_daemon Feb 9 19:25:10.125434 systemd-nspawn[1354]: + /usr/bin/google_clock_skew_daemon Feb 9 19:25:10.192916 systemd-nspawn[1354]: + wait -n 36 37 38 Feb 9 19:25:10.741306 google-clock-skew[1397]: INFO Starting Google Clock Skew daemon. Feb 9 19:25:10.760435 google-clock-skew[1397]: INFO Clock drift token has changed: 0. Feb 9 19:25:10.767360 systemd-nspawn[1354]: hwclock: Cannot access the Hardware Clock via any known method. Feb 9 19:25:10.767360 systemd-nspawn[1354]: hwclock: Use the --verbose option to see the details of our search for an access method. Feb 9 19:25:10.768762 google-clock-skew[1397]: WARNING Failed to sync system time with hardware clock. Feb 9 19:25:10.853955 google-networking[1398]: INFO Starting Google Networking daemon. Feb 9 19:25:10.915804 groupadd[1408]: group added to /etc/group: name=google-sudoers, GID=1000 Feb 9 19:25:10.920653 groupadd[1408]: group added to /etc/gshadow: name=google-sudoers Feb 9 19:25:10.926197 groupadd[1408]: new group: name=google-sudoers, GID=1000 Feb 9 19:25:10.942648 google-accounts[1396]: INFO Starting Google Accounts daemon. Feb 9 19:25:10.970423 google-accounts[1396]: WARNING OS Login not installed. Feb 9 19:25:10.971599 google-accounts[1396]: INFO Creating a new user account for 0. Feb 9 19:25:10.977495 systemd-nspawn[1354]: useradd: invalid user name '0': use --badname to ignore Feb 9 19:25:10.978142 google-accounts[1396]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Feb 9 19:25:15.596010 systemd[1]: Started sshd@3-10.128.0.66:22-147.75.109.163:54042.service. Feb 9 19:25:15.883099 sshd[1418]: Accepted publickey for core from 147.75.109.163 port 54042 ssh2: RSA SHA256:2enIA9a+Ie+oz8jW4x9GsRBGLqIoWe8fFi/jhwNVhOs Feb 9 19:25:15.884957 sshd[1418]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:25:15.891025 systemd-logind[1201]: New session 4 of user core. Feb 9 19:25:15.891734 systemd[1]: Started session-4.scope. Feb 9 19:25:16.095825 sshd[1418]: pam_unix(sshd:session): session closed for user core Feb 9 19:25:16.100171 systemd[1]: sshd@3-10.128.0.66:22-147.75.109.163:54042.service: Deactivated successfully. Feb 9 19:25:16.101806 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 19:25:16.101844 systemd-logind[1201]: Session 4 logged out. Waiting for processes to exit. Feb 9 19:25:16.103441 systemd-logind[1201]: Removed session 4. Feb 9 19:25:16.140203 systemd[1]: Started sshd@4-10.128.0.66:22-147.75.109.163:54050.service. Feb 9 19:25:16.425975 sshd[1425]: Accepted publickey for core from 147.75.109.163 port 54050 ssh2: RSA SHA256:2enIA9a+Ie+oz8jW4x9GsRBGLqIoWe8fFi/jhwNVhOs Feb 9 19:25:16.427876 sshd[1425]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:25:16.434378 systemd-logind[1201]: New session 5 of user core. Feb 9 19:25:16.434845 systemd[1]: Started session-5.scope. Feb 9 19:25:16.634109 sshd[1425]: pam_unix(sshd:session): session closed for user core Feb 9 19:25:16.638132 systemd[1]: sshd@4-10.128.0.66:22-147.75.109.163:54050.service: Deactivated successfully. Feb 9 19:25:16.639723 systemd[1]: session-5.scope: Deactivated successfully. 
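Earlier in this boot, google_set_multiqueue pinned the virtio-net queue interrupts for eth0 to CPUs 0 and 1 and set per-queue XPS masks; those are plain procfs/sysfs writes and can be read back directly. A sketch, using IRQ numbers 31 and 33 only because those happened to be assigned on this boot:

  cat /proc/irq/31/smp_affinity_list /proc/irq/33/smp_affinity_list     # 0 and 1 per the log
  cat /sys/class/net/eth0/queues/tx-0/xps_cpus /sys/class/net/eth0/queues/tx-1/xps_cpus
  echo 0 > /proc/irq/31/smp_affinity_list        # pinning is just an echo into the same file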
Feb 9 19:25:16.639779 systemd-logind[1201]: Session 5 logged out. Waiting for processes to exit. Feb 9 19:25:16.641421 systemd-logind[1201]: Removed session 5. Feb 9 19:25:16.677468 systemd[1]: Started sshd@5-10.128.0.66:22-147.75.109.163:54058.service. Feb 9 19:25:16.961355 sshd[1432]: Accepted publickey for core from 147.75.109.163 port 54058 ssh2: RSA SHA256:2enIA9a+Ie+oz8jW4x9GsRBGLqIoWe8fFi/jhwNVhOs Feb 9 19:25:16.963160 sshd[1432]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:25:16.969908 systemd[1]: Started session-6.scope. Feb 9 19:25:16.970233 systemd-logind[1201]: New session 6 of user core. Feb 9 19:25:17.173531 sshd[1432]: pam_unix(sshd:session): session closed for user core Feb 9 19:25:17.177414 systemd[1]: sshd@5-10.128.0.66:22-147.75.109.163:54058.service: Deactivated successfully. Feb 9 19:25:17.179051 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 19:25:17.179242 systemd-logind[1201]: Session 6 logged out. Waiting for processes to exit. Feb 9 19:25:17.181020 systemd-logind[1201]: Removed session 6. Feb 9 19:25:17.216697 systemd[1]: Started sshd@6-10.128.0.66:22-147.75.109.163:54074.service. Feb 9 19:25:17.499807 sshd[1439]: Accepted publickey for core from 147.75.109.163 port 54074 ssh2: RSA SHA256:2enIA9a+Ie+oz8jW4x9GsRBGLqIoWe8fFi/jhwNVhOs Feb 9 19:25:17.501657 sshd[1439]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:25:17.508231 systemd[1]: Started session-7.scope. Feb 9 19:25:17.508846 systemd-logind[1201]: New session 7 of user core. Feb 9 19:25:17.697063 sudo[1443]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 9 19:25:17.697474 sudo[1443]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:25:17.707610 dbus-daemon[1180]: \xd0\u000d\u0014\xdcMV: received setenforce notice (enforcing=161492416) Feb 9 19:25:17.709906 sudo[1443]: pam_unix(sudo:session): session closed for user root Feb 9 19:25:17.754058 sshd[1439]: pam_unix(sshd:session): session closed for user core Feb 9 19:25:17.759314 systemd[1]: sshd@6-10.128.0.66:22-147.75.109.163:54074.service: Deactivated successfully. Feb 9 19:25:17.760635 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 19:25:17.762436 systemd-logind[1201]: Session 7 logged out. Waiting for processes to exit. Feb 9 19:25:17.763981 systemd-logind[1201]: Removed session 7. Feb 9 19:25:17.797640 systemd[1]: Started sshd@7-10.128.0.66:22-147.75.109.163:54088.service. Feb 9 19:25:18.085073 sshd[1447]: Accepted publickey for core from 147.75.109.163 port 54088 ssh2: RSA SHA256:2enIA9a+Ie+oz8jW4x9GsRBGLqIoWe8fFi/jhwNVhOs Feb 9 19:25:18.086783 sshd[1447]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:25:18.093399 systemd[1]: Started session-8.scope. Feb 9 19:25:18.093713 systemd-logind[1201]: New session 8 of user core. Feb 9 19:25:18.261705 sudo[1452]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 9 19:25:18.262105 sudo[1452]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:25:18.266495 sudo[1452]: pam_unix(sudo:session): session closed for user root Feb 9 19:25:18.278919 sudo[1451]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 9 19:25:18.279329 sudo[1451]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:25:18.293014 systemd[1]: Stopping audit-rules.service... 
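The sudo in session 7 below runs setenforce 1, which flips SELinux into enforcing mode for the running system only (it does not touch the persistent config), and dbus-daemon logs a setenforce notice in response. A sketch of inspecting and toggling the mode, assuming the SELinux userspace tools are present:

  getenforce                 # Permissive or Enforcing
  sudo setenforce 1          # enforce until the next boot
  sestatus | grep -i mode    # current mode vs. the mode from the config file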
Feb 9 19:25:18.293000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 9 19:25:18.301137 kernel: kauditd_printk_skb: 149 callbacks suppressed Feb 9 19:25:18.301331 kernel: audit: type=1305 audit(1707506718.293:134): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 9 19:25:18.301404 auditctl[1455]: No rules Feb 9 19:25:18.302647 systemd[1]: audit-rules.service: Deactivated successfully. Feb 9 19:25:18.303075 systemd[1]: Stopped audit-rules.service. Feb 9 19:25:18.306450 systemd[1]: Starting audit-rules.service... Feb 9 19:25:18.293000 audit[1455]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffed16a8ab0 a2=420 a3=0 items=0 ppid=1 pid=1455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:18.293000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Feb 9 19:25:18.359507 augenrules[1473]: No rules Feb 9 19:25:18.363865 kernel: audit: type=1300 audit(1707506718.293:134): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffed16a8ab0 a2=420 a3=0 items=0 ppid=1 pid=1455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:18.363952 kernel: audit: type=1327 audit(1707506718.293:134): proctitle=2F7362696E2F617564697463746C002D44 Feb 9 19:25:18.363986 kernel: audit: type=1131 audit(1707506718.301:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:18.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:18.361109 systemd[1]: Finished audit-rules.service. Feb 9 19:25:18.362706 sudo[1451]: pam_unix(sudo:session): session closed for user root Feb 9 19:25:18.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:18.387326 kernel: audit: type=1130 audit(1707506718.358:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:18.358000 audit[1451]: USER_END pid=1451 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:25:18.411427 sshd[1447]: pam_unix(sshd:session): session closed for user core Feb 9 19:25:18.416805 systemd-logind[1201]: Session 8 logged out. Waiting for processes to exit. Feb 9 19:25:18.419129 systemd[1]: sshd@7-10.128.0.66:22-147.75.109.163:54088.service: Deactivated successfully. Feb 9 19:25:18.420388 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 19:25:18.422239 systemd-logind[1201]: Removed session 8. 
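The audit sequence above is auditctl -D flushing every loaded rule (hence the "No rules" line and the CONFIG_CHANGE op=remove_rule record) as audit-rules.service stops, after which the service is started again and augenrules rebuilds the rule set from what is left in /etc/audit/rules.d; the two files removed by sudo rm earlier are gone, so it also reports "No rules". The same steps by hand:

  sudo auditctl -l           # list loaded rules
  sudo auditctl -D           # delete all loaded rules
  sudo augenrules --load     # recompile /etc/audit/rules.d/*.rules and load the result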
Feb 9 19:25:18.358000 audit[1451]: CRED_DISP pid=1451 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:25:18.457367 kernel: audit: type=1106 audit(1707506718.358:137): pid=1451 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:25:18.457514 kernel: audit: type=1104 audit(1707506718.358:138): pid=1451 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:25:18.457553 kernel: audit: type=1106 audit(1707506718.407:139): pid=1447 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:25:18.407000 audit[1447]: USER_END pid=1447 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:25:18.465002 systemd[1]: Started sshd@8-10.128.0.66:22-147.75.109.163:54098.service. Feb 9 19:25:18.490173 kernel: audit: type=1104 audit(1707506718.407:140): pid=1447 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:25:18.407000 audit[1447]: CRED_DISP pid=1447 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:25:18.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.128.0.66:22-147.75.109.163:54088 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:18.539687 kernel: audit: type=1131 audit(1707506718.418:141): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.128.0.66:22-147.75.109.163:54088 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:18.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.128.0.66:22-147.75.109.163:54098 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:25:18.779000 audit[1480]: USER_ACCT pid=1480 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:25:18.781142 sshd[1480]: Accepted publickey for core from 147.75.109.163 port 54098 ssh2: RSA SHA256:2enIA9a+Ie+oz8jW4x9GsRBGLqIoWe8fFi/jhwNVhOs Feb 9 19:25:18.781000 audit[1480]: CRED_ACQ pid=1480 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:25:18.781000 audit[1480]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdaa6bc930 a2=3 a3=0 items=0 ppid=1 pid=1480 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:18.781000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:25:18.783064 sshd[1480]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:25:18.789856 systemd[1]: Started session-9.scope. Feb 9 19:25:18.790516 systemd-logind[1201]: New session 9 of user core. Feb 9 19:25:18.798000 audit[1480]: USER_START pid=1480 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:25:18.800000 audit[1483]: CRED_ACQ pid=1483 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:25:18.960000 audit[1484]: USER_ACCT pid=1484 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:25:18.961971 sudo[1484]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 19:25:18.960000 audit[1484]: CRED_REFR pid=1484 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:25:18.962386 sudo[1484]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:25:18.963000 audit[1484]: USER_START pid=1484 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:25:19.575925 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 19:25:19.586921 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 19:25:19.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:19.587582 systemd[1]: Reached target network-online.target. Feb 9 19:25:19.590052 systemd[1]: Starting docker.service... 
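Session 9 runs /home/core/install.sh via sudo, after which systemd-networkd-wait-online finishes, network-online.target is reached, and docker.service starts only then. network-online.target is reached once networkd reports its managed links configured. A few standard checks for that ordering (generic systemd/networkd commands, not taken from this boot):

  networkctl list
  systemctl status systemd-networkd-wait-online.service
  systemctl list-dependencies --after docker.service | grep -i network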
Feb 9 19:25:19.642229 env[1501]: time="2024-02-09T19:25:19.642180449Z" level=info msg="Starting up" Feb 9 19:25:19.644327 env[1501]: time="2024-02-09T19:25:19.644280638Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:25:19.644327 env[1501]: time="2024-02-09T19:25:19.644327960Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:25:19.644476 env[1501]: time="2024-02-09T19:25:19.644357188Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:25:19.644476 env[1501]: time="2024-02-09T19:25:19.644372687Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:25:19.646780 env[1501]: time="2024-02-09T19:25:19.646731050Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:25:19.646780 env[1501]: time="2024-02-09T19:25:19.646754355Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:25:19.646919 env[1501]: time="2024-02-09T19:25:19.646787741Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:25:19.646919 env[1501]: time="2024-02-09T19:25:19.646803715Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:25:19.656009 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2216151922-merged.mount: Deactivated successfully. Feb 9 19:25:20.306221 env[1501]: time="2024-02-09T19:25:20.306162137Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 9 19:25:20.306221 env[1501]: time="2024-02-09T19:25:20.306195578Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 9 19:25:20.306601 env[1501]: time="2024-02-09T19:25:20.306488971Z" level=info msg="Loading containers: start." 
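dockerd connects to its bundled containerd over the docker-containerd socket (the two grpc "parsed scheme: unix" blocks above), probes overlayfs support via the check-overlayfs-support mount, and warns that this 5.15 kernel exposes no cgroup blkio weight controller. Once the daemon is up, the storage and cgroup setup it settled on can be read back; shown here as generic docker CLI queries:

  docker info --format '{{.Driver}} {{.CgroupDriver}} cgroup-v{{.CgroupVersion}}'
  docker version --format '{{.Server.Version}}'        # 20.10.23 per the daemon log below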
Feb 9 19:25:20.376000 audit[1532]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1532 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:20.376000 audit[1532]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc7bbeac50 a2=0 a3=7ffc7bbeac3c items=0 ppid=1501 pid=1532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:20.376000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Feb 9 19:25:20.379000 audit[1534]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1534 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:20.379000 audit[1534]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff4ee85e60 a2=0 a3=7fff4ee85e4c items=0 ppid=1501 pid=1534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:20.379000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Feb 9 19:25:20.382000 audit[1536]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1536 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:20.382000 audit[1536]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffbb9f8c10 a2=0 a3=7fffbb9f8bfc items=0 ppid=1501 pid=1536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:20.382000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Feb 9 19:25:20.385000 audit[1538]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1538 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:20.385000 audit[1538]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe0bc27190 a2=0 a3=7ffe0bc2717c items=0 ppid=1501 pid=1538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:20.385000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Feb 9 19:25:20.389000 audit[1540]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1540 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:20.389000 audit[1540]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd1b60c520 a2=0 a3=7ffd1b60c50c items=0 ppid=1501 pid=1540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:20.389000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Feb 9 19:25:20.407000 audit[1545]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1545 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 
19:25:20.407000 audit[1545]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc4643dab0 a2=0 a3=7ffc4643da9c items=0 ppid=1501 pid=1545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:20.407000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Feb 9 19:25:20.420000 audit[1547]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1547 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:20.420000 audit[1547]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe8a182720 a2=0 a3=7ffe8a18270c items=0 ppid=1501 pid=1547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:20.420000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Feb 9 19:25:20.423000 audit[1549]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1549 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:20.423000 audit[1549]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff6b09e120 a2=0 a3=7fff6b09e10c items=0 ppid=1501 pid=1549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:20.423000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Feb 9 19:25:20.426000 audit[1551]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1551 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:20.426000 audit[1551]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffcaea47b10 a2=0 a3=7ffcaea47afc items=0 ppid=1501 pid=1551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:20.426000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 9 19:25:20.442000 audit[1555]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1555 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:20.442000 audit[1555]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fffda0a6c30 a2=0 a3=7fffda0a6c1c items=0 ppid=1501 pid=1555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:20.442000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Feb 9 19:25:20.443000 audit[1556]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1556 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:20.443000 audit[1556]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffda1cf6ee0 a2=0 a3=7ffda1cf6ecc items=0 ppid=1501 pid=1556 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:20.443000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 9 19:25:20.459335 kernel: Initializing XFRM netlink socket Feb 9 19:25:20.507531 env[1501]: time="2024-02-09T19:25:20.507474436Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 19:25:20.538000 audit[1564]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1564 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:20.538000 audit[1564]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffc16e68aa0 a2=0 a3=7ffc16e68a8c items=0 ppid=1501 pid=1564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:20.538000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Feb 9 19:25:20.554000 audit[1567]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1567 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:20.554000 audit[1567]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd791d5890 a2=0 a3=7ffd791d587c items=0 ppid=1501 pid=1567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:20.554000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Feb 9 19:25:20.560000 audit[1570]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1570 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:20.560000 audit[1570]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffef837d4d0 a2=0 a3=7ffef837d4bc items=0 ppid=1501 pid=1570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:20.560000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Feb 9 19:25:20.563000 audit[1572]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1572 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:20.563000 audit[1572]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd3d499690 a2=0 a3=7ffd3d49967c items=0 ppid=1501 pid=1572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:20.563000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Feb 9 19:25:20.566000 audit[1574]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1574 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:20.566000 audit[1574]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7fff7fede680 a2=0 a3=7fff7fede66c items=0 ppid=1501 pid=1574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:20.566000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Feb 9 19:25:20.569000 audit[1576]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1576 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:20.569000 audit[1576]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffcd8ea9bd0 a2=0 a3=7ffcd8ea9bbc items=0 ppid=1501 pid=1576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:20.569000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Feb 9 19:25:20.572000 audit[1578]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1578 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:20.572000 audit[1578]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffd5eb71c10 a2=0 a3=7ffd5eb71bfc items=0 ppid=1501 pid=1578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:20.572000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Feb 9 19:25:20.585000 audit[1581]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1581 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:20.585000 audit[1581]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffecabc5540 a2=0 a3=7ffecabc552c items=0 ppid=1501 pid=1581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:20.585000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Feb 9 19:25:20.589000 audit[1583]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1583 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:20.589000 audit[1583]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7fff279894a0 a2=0 a3=7fff2798948c items=0 ppid=1501 pid=1583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:20.589000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Feb 9 19:25:20.592000 
audit[1585]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1585 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:20.592000 audit[1585]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffc438f81b0 a2=0 a3=7ffc438f819c items=0 ppid=1501 pid=1585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:20.592000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Feb 9 19:25:20.595000 audit[1587]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1587 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:20.595000 audit[1587]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc1ba825d0 a2=0 a3=7ffc1ba825bc items=0 ppid=1501 pid=1587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:20.595000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Feb 9 19:25:20.597134 systemd-networkd[1083]: docker0: Link UP Feb 9 19:25:20.610000 audit[1591]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1591 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:20.610000 audit[1591]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe3aee1fd0 a2=0 a3=7ffe3aee1fbc items=0 ppid=1501 pid=1591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:20.610000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Feb 9 19:25:20.611000 audit[1592]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1592 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:20.611000 audit[1592]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd194d6be0 a2=0 a3=7ffd194d6bcc items=0 ppid=1501 pid=1592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:20.611000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 9 19:25:20.613731 env[1501]: time="2024-02-09T19:25:20.613672959Z" level=info msg="Loading containers: done." 
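Every NETFILTER_CFG record above carries the exact iptables invocation Docker made, hex-encoded in the PROCTITLE field with NUL-separated argv words. Decoding one makes the chain setup readable, and the resulting DOCKER/DOCKER-USER chains can then be listed directly:

  echo 2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 \
    | xxd -r -p | tr '\0' ' '; echo
  # prints: /usr/sbin/iptables --wait -I FORWARD -j DOCKER-USER
  sudo iptables -t nat -nL DOCKER
  sudo iptables -nL DOCKER-USER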
Feb 9 19:25:20.636021 env[1501]: time="2024-02-09T19:25:20.635938463Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 19:25:20.636276 env[1501]: time="2024-02-09T19:25:20.636226414Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 19:25:20.636412 env[1501]: time="2024-02-09T19:25:20.636393689Z" level=info msg="Daemon has completed initialization" Feb 9 19:25:20.658276 systemd[1]: Started docker.service. Feb 9 19:25:20.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:20.670870 env[1501]: time="2024-02-09T19:25:20.670792129Z" level=info msg="API listen on /run/docker.sock" Feb 9 19:25:20.695878 systemd[1]: Reloading. Feb 9 19:25:20.807229 /usr/lib/systemd/system-generators/torcx-generator[1642]: time="2024-02-09T19:25:20Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:25:20.810335 /usr/lib/systemd/system-generators/torcx-generator[1642]: time="2024-02-09T19:25:20Z" level=info msg="torcx already run" Feb 9 19:25:20.912778 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:25:20.912806 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:25:20.937011 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:25:21.047927 systemd[1]: Started kubelet.service. Feb 9 19:25:21.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:21.138798 kubelet[1688]: E0209 19:25:21.138407 1688 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:25:21.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 19:25:21.141141 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:25:21.141465 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:25:21.723879 env[1221]: time="2024-02-09T19:25:21.723792825Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 19:25:22.184113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1611648257.mount: Deactivated successfully. 
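The kubelet exit above ("the container runtime endpoint address was not specified or empty") is a flag-validation failure, not a runtime crash. A minimal sketch of supplying the endpoint on a containerd host, assuming the unit expands a KUBELET_EXTRA_ARGS-style variable (a kubeadm convention; this log does not show how /opt/bin/kubelet is actually parameterized here) and assuming containerd's conventional socket path:

    # Hypothetical drop-in; name, variable, and socket path are assumptions, not taken from this log.
    mkdir -p /etc/systemd/system/kubelet.service.d
    cat <<'EOF' >/etc/systemd/system/kubelet.service.d/20-runtime-endpoint.conf
    [Service]
    Environment="KUBELET_EXTRA_ARGS=--container-runtime-endpoint=unix:///run/containerd/containerd.sock"
    EOF
    systemctl daemon-reload
    systemctl restart kubelet.service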
Feb 9 19:25:24.310647 env[1221]: time="2024-02-09T19:25:24.310569565Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:24.314633 env[1221]: time="2024-02-09T19:25:24.314577237Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:24.317593 env[1221]: time="2024-02-09T19:25:24.317546100Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:24.320360 env[1221]: time="2024-02-09T19:25:24.320315157Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:24.321423 env[1221]: time="2024-02-09T19:25:24.321340058Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\"" Feb 9 19:25:24.336405 env[1221]: time="2024-02-09T19:25:24.336357009Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 19:25:26.354184 env[1221]: time="2024-02-09T19:25:26.354082639Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:26.357778 env[1221]: time="2024-02-09T19:25:26.357697665Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:26.361485 env[1221]: time="2024-02-09T19:25:26.361438539Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:26.364809 env[1221]: time="2024-02-09T19:25:26.364743748Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:26.366585 env[1221]: time="2024-02-09T19:25:26.366509648Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\"" Feb 9 19:25:26.384071 env[1221]: time="2024-02-09T19:25:26.384019963Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 19:25:27.582026 env[1221]: time="2024-02-09T19:25:27.581924289Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:27.586023 env[1221]: time="2024-02-09T19:25:27.585960166Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:27.589896 env[1221]: 
time="2024-02-09T19:25:27.589821165Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:27.593398 env[1221]: time="2024-02-09T19:25:27.593340225Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:27.596381 env[1221]: time="2024-02-09T19:25:27.596307969Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\"" Feb 9 19:25:27.613340 env[1221]: time="2024-02-09T19:25:27.613268906Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 19:25:28.681768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1445230562.mount: Deactivated successfully. Feb 9 19:25:29.272465 env[1221]: time="2024-02-09T19:25:29.272374673Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:29.276496 env[1221]: time="2024-02-09T19:25:29.276426803Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:29.279682 env[1221]: time="2024-02-09T19:25:29.279607131Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:29.286329 env[1221]: time="2024-02-09T19:25:29.286242400Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 9 19:25:29.289341 env[1221]: time="2024-02-09T19:25:29.289253045Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:29.303845 env[1221]: time="2024-02-09T19:25:29.303796302Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 19:25:29.736397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount15646229.mount: Deactivated successfully. 
Feb 9 19:25:29.746796 env[1221]: time="2024-02-09T19:25:29.746723656Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:29.750224 env[1221]: time="2024-02-09T19:25:29.750146239Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:29.752859 env[1221]: time="2024-02-09T19:25:29.752816287Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:29.755637 env[1221]: time="2024-02-09T19:25:29.755594527Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:29.756629 env[1221]: time="2024-02-09T19:25:29.756585205Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 9 19:25:29.774102 env[1221]: time="2024-02-09T19:25:29.774041470Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 9 19:25:30.559687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4090637029.mount: Deactivated successfully. Feb 9 19:25:30.946801 kernel: kauditd_printk_skb: 87 callbacks suppressed Feb 9 19:25:30.947012 kernel: audit: type=1131 audit(1707506730.915:179): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:30.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:30.916334 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 9 19:25:31.166722 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 19:25:31.167057 systemd[1]: Stopped kubelet.service. Feb 9 19:25:31.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:31.169975 systemd[1]: Started kubelet.service. Feb 9 19:25:31.190329 kernel: audit: type=1130 audit(1707506731.165:180): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:31.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:31.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:25:31.236932 kernel: audit: type=1131 audit(1707506731.165:181): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:31.237091 kernel: audit: type=1130 audit(1707506731.168:182): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:31.273918 kubelet[1736]: E0209 19:25:31.273855 1736 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:25:31.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 19:25:31.280195 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:25:31.280498 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:25:31.304320 kernel: audit: type=1131 audit(1707506731.279:183): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 19:25:35.128360 env[1221]: time="2024-02-09T19:25:35.128278092Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:35.205026 env[1221]: time="2024-02-09T19:25:35.204966662Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:35.230155 env[1221]: time="2024-02-09T19:25:35.230096432Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:35.234013 env[1221]: time="2024-02-09T19:25:35.233958947Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:35.235436 env[1221]: time="2024-02-09T19:25:35.235391234Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\"" Feb 9 19:25:35.250671 env[1221]: time="2024-02-09T19:25:35.250622536Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 19:25:35.814867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2549283903.mount: Deactivated successfully. 
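The second kubelet start above fails with the same flag-validation error while the image pulls continue in the background, so systemd keeps cycling the unit (the scheduled-restart message appears just before it). A sketch of inspecting that loop with standard systemd tooling:

    systemctl status kubelet.service --no-pager
    journalctl -u kubelet.service --no-pager -n 50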
Feb 9 19:25:36.602954 env[1221]: time="2024-02-09T19:25:36.602879505Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:36.997380 env[1221]: time="2024-02-09T19:25:36.997322743Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:37.391111 env[1221]: time="2024-02-09T19:25:37.390700165Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:37.590523 env[1221]: time="2024-02-09T19:25:37.590341535Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:37.591161 env[1221]: time="2024-02-09T19:25:37.591110480Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\"" Feb 9 19:25:41.035879 systemd[1]: Stopped kubelet.service. Feb 9 19:25:41.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:41.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:41.072678 systemd[1]: Reloading. Feb 9 19:25:41.081176 kernel: audit: type=1130 audit(1707506741.035:184): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:41.081355 kernel: audit: type=1131 audit(1707506741.039:185): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:41.192188 /usr/lib/systemd/system-generators/torcx-generator[1829]: time="2024-02-09T19:25:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:25:41.205253 /usr/lib/systemd/system-generators/torcx-generator[1829]: time="2024-02-09T19:25:41Z" level=info msg="torcx already run" Feb 9 19:25:41.284135 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:25:41.284165 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:25:41.308784 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
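The reload above repeats the warnings about legacy cgroup directives in locksmithd.service and the /var/run path in docker.socket. A drop-in sketch using the cgroup-v2 directive names; the weight and limit values are placeholders rather than values read from the shipped unit, and the ListenStream= path would separately be updated to /run/docker.sock in whichever unit defines it:

    # Override sketch only: CPUWeight=/MemoryMax= replace the deprecated CPUShares=/MemoryLimit=.
    mkdir -p /etc/systemd/system/locksmithd.service.d
    cat <<'EOF' >/etc/systemd/system/locksmithd.service.d/10-cgroupv2.conf
    [Service]
    CPUWeight=100
    MemoryMax=128M
    EOF
    systemctl daemon-reload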
Feb 9 19:25:41.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:41.427286 systemd[1]: Started kubelet.service. Feb 9 19:25:41.450322 kernel: audit: type=1130 audit(1707506741.426:186): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:41.517361 kubelet[1876]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:25:41.517852 kubelet[1876]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:25:41.518080 kubelet[1876]: I0209 19:25:41.518028 1876 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:25:41.522607 kubelet[1876]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:25:41.522749 kubelet[1876]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:25:42.146894 kubelet[1876]: I0209 19:25:42.146844 1876 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 19:25:42.146894 kubelet[1876]: I0209 19:25:42.146877 1876 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:25:42.147236 kubelet[1876]: I0209 19:25:42.147198 1876 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 19:25:42.151028 kubelet[1876]: E0209 19:25:42.151002 1876 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.66:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.66:6443: connect: connection refused Feb 9 19:25:42.151244 kubelet[1876]: I0209 19:25:42.151224 1876 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:25:42.156885 kubelet[1876]: I0209 19:25:42.156851 1876 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 19:25:42.157483 kubelet[1876]: I0209 19:25:42.157444 1876 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:25:42.157597 kubelet[1876]: I0209 19:25:42.157541 1876 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:25:42.157597 kubelet[1876]: I0209 19:25:42.157586 1876 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:25:42.157824 kubelet[1876]: I0209 19:25:42.157606 1876 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 19:25:42.157824 kubelet[1876]: I0209 19:25:42.157760 1876 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:25:42.164569 kubelet[1876]: I0209 19:25:42.164543 1876 kubelet.go:398] "Attempting to sync node with API server" Feb 9 19:25:42.164692 kubelet[1876]: I0209 19:25:42.164580 1876 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:25:42.164692 kubelet[1876]: I0209 19:25:42.164619 1876 kubelet.go:297] "Adding apiserver pod source" Feb 9 19:25:42.164692 kubelet[1876]: I0209 19:25:42.164644 1876 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:25:42.167222 kubelet[1876]: W0209 19:25:42.167159 1876 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.128.0.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.66:6443: connect: connection refused Feb 9 19:25:42.167389 kubelet[1876]: E0209 19:25:42.167245 1876 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.66:6443: connect: connection refused Feb 9 19:25:42.167389 kubelet[1876]: I0209 19:25:42.167374 1876 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:25:42.169529 kubelet[1876]: W0209 19:25:42.169474 1876 
reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.128.0.66:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.66:6443: connect: connection refused Feb 9 19:25:42.169637 kubelet[1876]: E0209 19:25:42.169541 1876 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.66:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.66:6443: connect: connection refused Feb 9 19:25:42.170804 kubelet[1876]: W0209 19:25:42.170778 1876 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 19:25:42.171386 kubelet[1876]: I0209 19:25:42.171364 1876 server.go:1186] "Started kubelet" Feb 9 19:25:42.172000 audit[1876]: AVC avc: denied { mac_admin } for pid=1876 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:25:42.174480 kubelet[1876]: E0209 19:25:42.174451 1876 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:25:42.174618 kubelet[1876]: E0209 19:25:42.174605 1876 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:25:42.175045 kubelet[1876]: E0209 19:25:42.174935 1876 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal.17b24855d2b4f29e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", UID:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 25, 42, 171333278, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 25, 42, 171333278, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.128.0.66:6443/api/v1/namespaces/default/events": dial tcp 10.128.0.66:6443: connect: connection refused'(may retry after sleeping) Feb 9 19:25:42.176754 kubelet[1876]: I0209 19:25:42.176723 1876 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:25:42.177752 kubelet[1876]: I0209 19:25:42.177733 1876 server.go:451] "Adding debug handlers to kubelet server" Feb 9 19:25:42.179408 kubelet[1876]: I0209 19:25:42.179386 1876 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on 
plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 9 19:25:42.179587 kubelet[1876]: I0209 19:25:42.179572 1876 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 9 19:25:42.179775 kubelet[1876]: I0209 19:25:42.179762 1876 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:25:42.187037 kubelet[1876]: I0209 19:25:42.187009 1876 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 19:25:42.189427 kubelet[1876]: I0209 19:25:42.189404 1876 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:25:42.196433 kernel: audit: type=1400 audit(1707506742.172:187): avc: denied { mac_admin } for pid=1876 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:25:42.196548 kernel: audit: type=1401 audit(1707506742.172:187): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:25:42.172000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:25:42.196680 kubelet[1876]: W0209 19:25:42.195452 1876 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.128.0.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.66:6443: connect: connection refused Feb 9 19:25:42.196680 kubelet[1876]: E0209 19:25:42.195535 1876 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.66:6443: connect: connection refused Feb 9 19:25:42.196680 kubelet[1876]: E0209 19:25:42.195646 1876 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.128.0.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal?timeout=10s": dial tcp 10.128.0.66:6443: connect: connection refused Feb 9 19:25:42.172000 audit[1876]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0009b9710 a1=c000688f30 a2=c0009b96e0 a3=25 items=0 ppid=1 pid=1876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.240321 kernel: audit: type=1300 audit(1707506742.172:187): arch=c000003e syscall=188 success=no exit=-22 a0=c0009b9710 a1=c000688f30 a2=c0009b96e0 a3=25 items=0 ppid=1 pid=1876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.172000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:25:42.271323 kernel: audit: type=1327 audit(1707506742.172:187): 
proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:25:42.178000 audit[1876]: AVC avc: denied { mac_admin } for pid=1876 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:25:42.287110 kubelet[1876]: I0209 19:25:42.287083 1876 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:25:42.287360 kubelet[1876]: I0209 19:25:42.287345 1876 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:25:42.287467 kubelet[1876]: I0209 19:25:42.287455 1876 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:25:42.290773 kubelet[1876]: I0209 19:25:42.290747 1876 policy_none.go:49] "None policy: Start" Feb 9 19:25:42.178000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:25:42.293785 kubelet[1876]: I0209 19:25:42.293767 1876 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:25:42.293939 kubelet[1876]: I0209 19:25:42.293926 1876 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:25:42.304757 kernel: audit: type=1400 audit(1707506742.178:188): avc: denied { mac_admin } for pid=1876 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:25:42.304907 kernel: audit: type=1401 audit(1707506742.178:188): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:25:42.305729 kubelet[1876]: I0209 19:25:42.305694 1876 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:25:42.306610 kubelet[1876]: E0209 19:25:42.306588 1876 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.128.0.66:6443/api/v1/nodes\": dial tcp 10.128.0.66:6443: connect: connection refused" node="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:25:42.340068 kernel: audit: type=1300 audit(1707506742.178:188): arch=c000003e syscall=188 success=no exit=-22 a0=c000d62ee0 a1=c000688f48 a2=c0009b97a0 a3=25 items=0 ppid=1 pid=1876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.178000 audit[1876]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000d62ee0 a1=c000688f48 a2=c0009b97a0 a3=25 items=0 ppid=1 pid=1876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.178000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:25:42.182000 audit[1886]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1886 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:42.182000 audit[1886]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc5a423860 a2=0 a3=7ffc5a42384c items=0 ppid=1876 pid=1886 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.182000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 19:25:42.187000 audit[1887]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1887 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:42.187000 audit[1887]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff44776050 a2=0 a3=7fff4477603c items=0 ppid=1876 pid=1887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.187000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 19:25:42.191000 audit[1889]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1889 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:42.191000 audit[1889]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd038b76d0 a2=0 a3=7ffd038b76bc items=0 ppid=1876 pid=1889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.191000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 19:25:42.196000 audit[1891]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1891 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:42.196000 audit[1891]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe52e508f0 a2=0 a3=7ffe52e508dc items=0 ppid=1876 pid=1891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.196000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 19:25:42.351724 kubelet[1876]: I0209 19:25:42.351694 1876 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:25:42.352000 audit[1876]: AVC avc: denied { mac_admin } for pid=1876 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:25:42.352000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:25:42.352000 audit[1876]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000eac3c0 a1=c0009ee420 a2=c000eac390 a3=25 items=0 ppid=1 pid=1876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.352000 audit: PROCTITLE 
proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:25:42.353909 kubelet[1876]: I0209 19:25:42.353556 1876 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 9 19:25:42.353909 kubelet[1876]: I0209 19:25:42.353799 1876 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:25:42.354768 kubelet[1876]: E0209 19:25:42.354747 1876 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal\" not found" Feb 9 19:25:42.355000 audit[1899]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1899 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:42.355000 audit[1899]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffdd1b53350 a2=0 a3=7ffdd1b5333c items=0 ppid=1876 pid=1899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.355000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Feb 9 19:25:42.357000 audit[1900]: NETFILTER_CFG table=nat:31 family=2 entries=1 op=nft_register_chain pid=1900 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:42.357000 audit[1900]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcfa6831a0 a2=0 a3=7ffcfa68318c items=0 ppid=1876 pid=1900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.357000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 19:25:42.366000 audit[1903]: NETFILTER_CFG table=nat:32 family=2 entries=1 op=nft_register_rule pid=1903 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:42.366000 audit[1903]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffd8b6a3f50 a2=0 a3=7ffd8b6a3f3c items=0 ppid=1876 pid=1903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.366000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 19:25:42.372000 audit[1906]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=1906 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:42.372000 audit[1906]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffc4483eb40 a2=0 a3=7ffc4483eb2c items=0 ppid=1876 pid=1906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.372000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 19:25:42.373000 audit[1907]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1907 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:42.373000 audit[1907]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc6438e9f0 a2=0 a3=7ffc6438e9dc items=0 ppid=1876 pid=1907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.373000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 19:25:42.375000 audit[1908]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_chain pid=1908 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:42.375000 audit[1908]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd2683b550 a2=0 a3=7ffd2683b53c items=0 ppid=1876 pid=1908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.375000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 19:25:42.378000 audit[1910]: NETFILTER_CFG table=nat:36 family=2 entries=1 op=nft_register_rule pid=1910 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:42.378000 audit[1910]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffe09b0abf0 a2=0 a3=7ffe09b0abdc items=0 ppid=1876 pid=1910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.378000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 19:25:42.381000 audit[1912]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_rule pid=1912 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:42.381000 audit[1912]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffdccf117d0 a2=0 a3=7ffdccf117bc items=0 ppid=1876 pid=1912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.381000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 19:25:42.384000 audit[1914]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=1914 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:42.384000 audit[1914]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffd8afdaf20 a2=0 a3=7ffd8afdaf0c items=0 ppid=1876 pid=1914 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.384000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 19:25:42.388000 audit[1916]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_rule pid=1916 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:42.388000 audit[1916]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffec75ffa00 a2=0 a3=7ffec75ff9ec items=0 ppid=1876 pid=1916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.388000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 19:25:42.392000 audit[1918]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_rule pid=1918 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:42.392000 audit[1918]: SYSCALL arch=c000003e syscall=46 success=yes exit=540 a0=3 a1=7ffe2632bb30 a2=0 a3=7ffe2632bb1c items=0 ppid=1876 pid=1918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.392000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 9 19:25:42.394382 kubelet[1876]: I0209 19:25:42.394355 1876 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 9 19:25:42.394000 audit[1919]: NETFILTER_CFG table=mangle:41 family=10 entries=2 op=nft_register_chain pid=1919 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:25:42.394000 audit[1919]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe3f0ccd90 a2=0 a3=7ffe3f0ccd7c items=0 ppid=1876 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.394000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 19:25:42.394000 audit[1920]: NETFILTER_CFG table=mangle:42 family=2 entries=1 op=nft_register_chain pid=1920 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:42.394000 audit[1920]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffff4358290 a2=0 a3=7ffff435827c items=0 ppid=1876 pid=1920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.394000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 9 19:25:42.397000 audit[1921]: NETFILTER_CFG table=nat:43 family=2 entries=1 op=nft_register_chain pid=1921 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:42.397000 audit[1921]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeef9de660 a2=0 a3=7ffeef9de64c items=0 ppid=1876 pid=1921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.397000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 19:25:42.399797 kubelet[1876]: E0209 19:25:42.397070 1876 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.128.0.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal?timeout=10s": dial tcp 10.128.0.66:6443: connect: connection refused Feb 9 19:25:42.400000 audit[1922]: NETFILTER_CFG table=nat:44 family=10 entries=2 op=nft_register_chain pid=1922 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:25:42.400000 audit[1922]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe57cdf5f0 a2=0 a3=7ffe57cdf5dc items=0 ppid=1876 pid=1922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.400000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 19:25:42.401000 audit[1923]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_chain pid=1923 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:25:42.401000 audit[1923]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc4a312480 a2=0 a3=7ffc4a31246c items=0 ppid=1876 pid=1923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.401000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 19:25:42.404000 audit[1925]: NETFILTER_CFG table=nat:46 family=10 entries=1 op=nft_register_rule pid=1925 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:25:42.404000 audit[1925]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fff3018ac00 a2=0 a3=7fff3018abec items=0 ppid=1876 pid=1925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.404000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 19:25:42.406000 audit[1926]: NETFILTER_CFG table=filter:47 family=10 entries=2 op=nft_register_chain pid=1926 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:25:42.406000 audit[1926]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffee5101600 a2=0 a3=7ffee51015ec items=0 ppid=1876 pid=1926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.406000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 19:25:42.409000 audit[1928]: NETFILTER_CFG table=filter:48 family=10 entries=1 op=nft_register_rule pid=1928 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:25:42.409000 audit[1928]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffc1f8faba0 a2=0 a3=7ffc1f8fab8c items=0 ppid=1876 pid=1928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.409000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 19:25:42.411000 audit[1929]: NETFILTER_CFG table=nat:49 family=10 entries=1 op=nft_register_chain pid=1929 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:25:42.411000 audit[1929]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcaa8503f0 a2=0 a3=7ffcaa8503dc items=0 ppid=1876 pid=1929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.411000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 19:25:42.412000 audit[1930]: NETFILTER_CFG table=nat:50 family=10 entries=1 op=nft_register_chain pid=1930 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:25:42.412000 audit[1930]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd7ea4a760 a2=0 a3=7ffd7ea4a74c items=0 ppid=1876 pid=1930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.412000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 19:25:42.415000 audit[1932]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_rule pid=1932 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:25:42.415000 audit[1932]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fffa9430960 a2=0 a3=7fffa943094c items=0 ppid=1876 pid=1932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.415000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 19:25:42.419000 audit[1934]: NETFILTER_CFG table=nat:52 family=10 entries=2 op=nft_register_chain pid=1934 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:25:42.419000 audit[1934]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff8526ef30 a2=0 a3=7fff8526ef1c items=0 ppid=1876 pid=1934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.419000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 19:25:42.422000 audit[1936]: NETFILTER_CFG table=nat:53 family=10 entries=1 op=nft_register_rule pid=1936 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:25:42.422000 audit[1936]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffccaf42e60 a2=0 a3=7ffccaf42e4c items=0 ppid=1876 pid=1936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.422000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 19:25:42.425000 audit[1938]: NETFILTER_CFG table=nat:54 family=10 entries=1 op=nft_register_rule pid=1938 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:25:42.425000 audit[1938]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffcfe2e2ec0 a2=0 a3=7ffcfe2e2eac items=0 ppid=1876 pid=1938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.425000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 19:25:42.430000 audit[1940]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_rule pid=1940 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:25:42.430000 audit[1940]: SYSCALL arch=c000003e syscall=46 success=yes exit=556 a0=3 a1=7ffff69a4b90 a2=0 a3=7ffff69a4b7c items=0 
ppid=1876 pid=1940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.430000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 9 19:25:42.432996 kubelet[1876]: I0209 19:25:42.432973 1876 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 19:25:42.433141 kubelet[1876]: I0209 19:25:42.433127 1876 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 19:25:42.433279 kubelet[1876]: I0209 19:25:42.433261 1876 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 19:25:42.433477 kubelet[1876]: E0209 19:25:42.433460 1876 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 19:25:42.433000 audit[1941]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=1941 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:25:42.433000 audit[1941]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd30e0ded0 a2=0 a3=7ffd30e0debc items=0 ppid=1876 pid=1941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.433000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 9 19:25:42.435171 kubelet[1876]: W0209 19:25:42.435142 1876 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.128.0.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.66:6443: connect: connection refused Feb 9 19:25:42.435334 kubelet[1876]: E0209 19:25:42.435317 1876 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.66:6443: connect: connection refused Feb 9 19:25:42.435000 audit[1942]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=1942 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:25:42.435000 audit[1942]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc2d4b0910 a2=0 a3=7ffc2d4b08fc items=0 ppid=1876 pid=1942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.435000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 19:25:42.436000 audit[1943]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=1943 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:25:42.436000 audit[1943]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc6f0dfd00 a2=0 a3=7ffc6f0dfcec items=0 ppid=1876 pid=1943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:42.436000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 19:25:42.513410 kubelet[1876]: I0209 19:25:42.513362 1876 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:25:42.513756 kubelet[1876]: E0209 19:25:42.513733 1876 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.128.0.66:6443/api/v1/nodes\": dial tcp 10.128.0.66:6443: connect: connection refused" node="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:25:42.534048 kubelet[1876]: I0209 19:25:42.533988 1876 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:25:42.541426 kubelet[1876]: I0209 19:25:42.541396 1876 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:25:42.546254 kubelet[1876]: I0209 19:25:42.546227 1876 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:25:42.555248 kubelet[1876]: I0209 19:25:42.555219 1876 status_manager.go:698] "Failed to get status for pod" podUID=9a04624a2b056f7ac7821a08cbcbf416 pod="kube-system/kube-apiserver-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" err="Get \"https://10.128.0.66:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal\": dial tcp 10.128.0.66:6443: connect: connection refused" Feb 9 19:25:42.563933 kubelet[1876]: I0209 19:25:42.563904 1876 status_manager.go:698] "Failed to get status for pod" podUID=785017324691e7c30ab3d931bb41aaf4 pod="kube-system/kube-controller-manager-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" err="Get \"https://10.128.0.66:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal\": dial tcp 10.128.0.66:6443: connect: connection refused" Feb 9 19:25:42.567403 kubelet[1876]: I0209 19:25:42.567378 1876 status_manager.go:698] "Failed to get status for pod" podUID=b3e69a1beafcbb94fef931534e28fb8d pod="kube-system/kube-scheduler-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" err="Get \"https://10.128.0.66:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal\": dial tcp 10.128.0.66:6443: connect: connection refused" Feb 9 19:25:42.593950 kubelet[1876]: I0209 19:25:42.593892 1876 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b3e69a1beafcbb94fef931534e28fb8d-kubeconfig\") pod \"kube-scheduler-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal\" (UID: \"b3e69a1beafcbb94fef931534e28fb8d\") " pod="kube-system/kube-scheduler-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:25:42.594286 kubelet[1876]: I0209 19:25:42.594263 1876 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9a04624a2b056f7ac7821a08cbcbf416-ca-certs\") pod \"kube-apiserver-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal\" (UID: \"9a04624a2b056f7ac7821a08cbcbf416\") " pod="kube-system/kube-apiserver-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:25:42.594502 kubelet[1876]: I0209 19:25:42.594480 1876 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9a04624a2b056f7ac7821a08cbcbf416-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal\" (UID: \"9a04624a2b056f7ac7821a08cbcbf416\") " pod="kube-system/kube-apiserver-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:25:42.594601 kubelet[1876]: I0209 19:25:42.594547 1876 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/785017324691e7c30ab3d931bb41aaf4-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal\" (UID: \"785017324691e7c30ab3d931bb41aaf4\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:25:42.594601 kubelet[1876]: I0209 19:25:42.594596 1876 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/785017324691e7c30ab3d931bb41aaf4-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal\" (UID: \"785017324691e7c30ab3d931bb41aaf4\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:25:42.594751 kubelet[1876]: I0209 19:25:42.594635 1876 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9a04624a2b056f7ac7821a08cbcbf416-k8s-certs\") pod \"kube-apiserver-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal\" (UID: \"9a04624a2b056f7ac7821a08cbcbf416\") " pod="kube-system/kube-apiserver-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:25:42.594751 kubelet[1876]: I0209 19:25:42.594682 1876 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/785017324691e7c30ab3d931bb41aaf4-ca-certs\") pod \"kube-controller-manager-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal\" (UID: \"785017324691e7c30ab3d931bb41aaf4\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:25:42.594751 kubelet[1876]: I0209 19:25:42.594729 1876 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/785017324691e7c30ab3d931bb41aaf4-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal\" (UID: \"785017324691e7c30ab3d931bb41aaf4\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:25:42.594904 kubelet[1876]: I0209 19:25:42.594774 1876 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/785017324691e7c30ab3d931bb41aaf4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal\" (UID: \"785017324691e7c30ab3d931bb41aaf4\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:25:42.797848 kubelet[1876]: E0209 19:25:42.797789 1876 controller.go:146] failed to ensure lease exists, will retry in 800ms, 
error: Get "https://10.128.0.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal?timeout=10s": dial tcp 10.128.0.66:6443: connect: connection refused Feb 9 19:25:42.857100 env[1221]: time="2024-02-09T19:25:42.857014378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal,Uid:9a04624a2b056f7ac7821a08cbcbf416,Namespace:kube-system,Attempt:0,}" Feb 9 19:25:42.859418 env[1221]: time="2024-02-09T19:25:42.859372914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal,Uid:785017324691e7c30ab3d931bb41aaf4,Namespace:kube-system,Attempt:0,}" Feb 9 19:25:42.869693 env[1221]: time="2024-02-09T19:25:42.869634114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal,Uid:b3e69a1beafcbb94fef931534e28fb8d,Namespace:kube-system,Attempt:0,}" Feb 9 19:25:42.922635 kubelet[1876]: I0209 19:25:42.922050 1876 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:25:42.922635 kubelet[1876]: E0209 19:25:42.922541 1876 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.128.0.66:6443/api/v1/nodes\": dial tcp 10.128.0.66:6443: connect: connection refused" node="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:25:42.970947 kubelet[1876]: W0209 19:25:42.970858 1876 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.128.0.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.66:6443: connect: connection refused Feb 9 19:25:42.970947 kubelet[1876]: E0209 19:25:42.970949 1876 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.66:6443: connect: connection refused Feb 9 19:25:43.082693 kubelet[1876]: W0209 19:25:43.082451 1876 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.128.0.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.66:6443: connect: connection refused Feb 9 19:25:43.082693 kubelet[1876]: E0209 19:25:43.082533 1876 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.66:6443: connect: connection refused Feb 9 19:25:43.339266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount731171732.mount: Deactivated successfully. 
Feb 9 19:25:43.350083 env[1221]: time="2024-02-09T19:25:43.350022311Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:43.351902 env[1221]: time="2024-02-09T19:25:43.351855584Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:43.356946 env[1221]: time="2024-02-09T19:25:43.356898473Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:43.359397 env[1221]: time="2024-02-09T19:25:43.359347201Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:43.360508 env[1221]: time="2024-02-09T19:25:43.360468495Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:43.363715 env[1221]: time="2024-02-09T19:25:43.363668411Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:43.365527 env[1221]: time="2024-02-09T19:25:43.365462710Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:43.366758 env[1221]: time="2024-02-09T19:25:43.366722754Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:43.368662 env[1221]: time="2024-02-09T19:25:43.368613706Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:43.370781 env[1221]: time="2024-02-09T19:25:43.370704832Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:43.373402 env[1221]: time="2024-02-09T19:25:43.373330343Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:43.374358 env[1221]: time="2024-02-09T19:25:43.374278359Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:25:43.413177 kubelet[1876]: W0209 19:25:43.413078 1876 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.128.0.66:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.66:6443: connect: connection refused Feb 9 19:25:43.413177 kubelet[1876]: E0209 19:25:43.413175 1876 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: 
Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.66:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.66:6443: connect: connection refused Feb 9 19:25:43.414633 env[1221]: time="2024-02-09T19:25:43.414531579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:25:43.414845 env[1221]: time="2024-02-09T19:25:43.414608248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:25:43.414845 env[1221]: time="2024-02-09T19:25:43.414627111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:25:43.422874 env[1221]: time="2024-02-09T19:25:43.415327635Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/afe01577802c209711c7bccbbec8e65d7e7f771a47c9d4b1a389eb86de1af0bf pid=1952 runtime=io.containerd.runc.v2 Feb 9 19:25:43.451148 env[1221]: time="2024-02-09T19:25:43.451005815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:25:43.451148 env[1221]: time="2024-02-09T19:25:43.451136231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:25:43.451472 env[1221]: time="2024-02-09T19:25:43.451179509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:25:43.451472 env[1221]: time="2024-02-09T19:25:43.451401548Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f5a7cc32304756516d4394919a088c8ddce6123fe100630779cfdb640d1ea73 pid=1980 runtime=io.containerd.runc.v2 Feb 9 19:25:43.455515 env[1221]: time="2024-02-09T19:25:43.455337183Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:25:43.455773 env[1221]: time="2024-02-09T19:25:43.455721772Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:25:43.455962 env[1221]: time="2024-02-09T19:25:43.455915428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:25:43.456588 env[1221]: time="2024-02-09T19:25:43.456521713Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3ed1255c9d4e8119c557c08ddced541880dcbc4aff976b726c1d9f84089db3c9 pid=1979 runtime=io.containerd.runc.v2 Feb 9 19:25:43.535735 kubelet[1876]: W0209 19:25:43.535650 1876 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.128.0.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.66:6443: connect: connection refused Feb 9 19:25:43.535735 kubelet[1876]: E0209 19:25:43.535734 1876 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.66:6443: connect: connection refused Feb 9 19:25:43.575652 env[1221]: time="2024-02-09T19:25:43.575578032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal,Uid:9a04624a2b056f7ac7821a08cbcbf416,Namespace:kube-system,Attempt:0,} returns sandbox id \"afe01577802c209711c7bccbbec8e65d7e7f771a47c9d4b1a389eb86de1af0bf\"" Feb 9 19:25:43.577894 kubelet[1876]: E0209 19:25:43.577867 1876 kubelet_pods.go:413] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-21291" Feb 9 19:25:43.580817 env[1221]: time="2024-02-09T19:25:43.580770672Z" level=info msg="CreateContainer within sandbox \"afe01577802c209711c7bccbbec8e65d7e7f771a47c9d4b1a389eb86de1af0bf\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 19:25:43.600842 kubelet[1876]: E0209 19:25:43.599373 1876 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://10.128.0.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal?timeout=10s": dial tcp 10.128.0.66:6443: connect: connection refused Feb 9 19:25:43.633384 env[1221]: time="2024-02-09T19:25:43.633329893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal,Uid:785017324691e7c30ab3d931bb41aaf4,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ed1255c9d4e8119c557c08ddced541880dcbc4aff976b726c1d9f84089db3c9\"" Feb 9 19:25:43.634978 kubelet[1876]: E0209 19:25:43.634950 1876 kubelet_pods.go:413] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-3510-3-2-c23b420fd1c4e436d83b.c.flat" Feb 9 19:25:43.636279 env[1221]: time="2024-02-09T19:25:43.636232852Z" level=info msg="CreateContainer within sandbox \"afe01577802c209711c7bccbbec8e65d7e7f771a47c9d4b1a389eb86de1af0bf\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2e662bfac909ec055538a315a6562812a22bf2e0b43c807e57f8b76abae6f41f\"" Feb 9 19:25:43.637984 env[1221]: time="2024-02-09T19:25:43.637947495Z" level=info msg="StartContainer for \"2e662bfac909ec055538a315a6562812a22bf2e0b43c807e57f8b76abae6f41f\"" Feb 9 19:25:43.638223 env[1221]: 
time="2024-02-09T19:25:43.638180187Z" level=info msg="CreateContainer within sandbox \"3ed1255c9d4e8119c557c08ddced541880dcbc4aff976b726c1d9f84089db3c9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 19:25:43.648912 env[1221]: time="2024-02-09T19:25:43.648849217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal,Uid:b3e69a1beafcbb94fef931534e28fb8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f5a7cc32304756516d4394919a088c8ddce6123fe100630779cfdb640d1ea73\"" Feb 9 19:25:43.651419 kubelet[1876]: E0209 19:25:43.651274 1876 kubelet_pods.go:413] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-21291" Feb 9 19:25:43.655664 env[1221]: time="2024-02-09T19:25:43.655615538Z" level=info msg="CreateContainer within sandbox \"3f5a7cc32304756516d4394919a088c8ddce6123fe100630779cfdb640d1ea73\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 19:25:43.662936 env[1221]: time="2024-02-09T19:25:43.662880812Z" level=info msg="CreateContainer within sandbox \"3ed1255c9d4e8119c557c08ddced541880dcbc4aff976b726c1d9f84089db3c9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"86b29a8baa54b3d8b20f38e5630e66aece87ca0ec704e7e0cfeed9e7ad4fd056\"" Feb 9 19:25:43.664361 env[1221]: time="2024-02-09T19:25:43.664324634Z" level=info msg="StartContainer for \"86b29a8baa54b3d8b20f38e5630e66aece87ca0ec704e7e0cfeed9e7ad4fd056\"" Feb 9 19:25:43.688453 env[1221]: time="2024-02-09T19:25:43.688357545Z" level=info msg="CreateContainer within sandbox \"3f5a7cc32304756516d4394919a088c8ddce6123fe100630779cfdb640d1ea73\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"63128736f494c40ce185ae8b4cd26bd1dbc845c9ddf4ee1ded7f11c4cddb4c0f\"" Feb 9 19:25:43.689579 env[1221]: time="2024-02-09T19:25:43.689511632Z" level=info msg="StartContainer for \"63128736f494c40ce185ae8b4cd26bd1dbc845c9ddf4ee1ded7f11c4cddb4c0f\"" Feb 9 19:25:43.740442 kubelet[1876]: I0209 19:25:43.739975 1876 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:25:43.740442 kubelet[1876]: E0209 19:25:43.740393 1876 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.128.0.66:6443/api/v1/nodes\": dial tcp 10.128.0.66:6443: connect: connection refused" node="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:25:43.810499 env[1221]: time="2024-02-09T19:25:43.810442058Z" level=info msg="StartContainer for \"2e662bfac909ec055538a315a6562812a22bf2e0b43c807e57f8b76abae6f41f\" returns successfully" Feb 9 19:25:43.844590 env[1221]: time="2024-02-09T19:25:43.844534243Z" level=info msg="StartContainer for \"86b29a8baa54b3d8b20f38e5630e66aece87ca0ec704e7e0cfeed9e7ad4fd056\" returns successfully" Feb 9 19:25:44.066536 env[1221]: time="2024-02-09T19:25:44.066480943Z" level=info msg="StartContainer for \"63128736f494c40ce185ae8b4cd26bd1dbc845c9ddf4ee1ded7f11c4cddb4c0f\" returns successfully" Feb 9 19:25:44.746122 update_engine[1202]: I0209 19:25:44.745352 1202 update_attempter.cc:509] Updating boot flags... 
Feb 9 19:25:45.345207 kubelet[1876]: I0209 19:25:45.345169 1876 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:25:47.975029 kubelet[1876]: E0209 19:25:47.974981 1876 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal\" not found" node="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:25:48.064370 kubelet[1876]: I0209 19:25:48.064327 1876 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:25:48.111681 kubelet[1876]: E0209 19:25:48.111539 1876 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal.17b24855d2b4f29e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", UID:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 25, 42, 171333278, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 25, 42, 171333278, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
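The "Hostname for pod was too long, truncated it" records above trim each static pod's hostname to hostnameMaxLen=63 (the DNS label limit), and the logged truncatedHostname is exactly the first 63 characters of the pod name. A quick Python check (illustrative only):

# The 73-character pod name, cut to 63 characters, reproduces the logged truncatedHostname.
name = "kube-apiserver-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal"
print(len(name), name[:63])
# -> 73 kube-apiserver-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-21291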
Feb 9 19:25:48.166179 kubelet[1876]: E0209 19:25:48.166049 1876 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal.17b24855d2e69e67", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", UID:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 25, 42, 174588519, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 25, 42, 174588519, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 19:25:48.170614 kubelet[1876]: I0209 19:25:48.170565 1876 apiserver.go:52] "Watching apiserver" Feb 9 19:25:48.190433 kubelet[1876]: I0209 19:25:48.190384 1876 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:25:48.220500 kubelet[1876]: E0209 19:25:48.220367 1876 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal.17b24855d9910cb6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", UID:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 25, 42, 286421174, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 25, 42, 286421174, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:25:48.237805 kubelet[1876]: I0209 19:25:48.237655 1876 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:25:48.275988 kubelet[1876]: E0209 19:25:48.275847 1876 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal.17b24855d9913083", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", UID:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 25, 42, 286430339, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 25, 42, 286430339, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 19:25:48.329976 kubelet[1876]: E0209 19:25:48.329851 1876 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal.17b24855d9914577", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", UID:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 25, 42, 286435703, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 25, 42, 286435703, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:25:48.386588 kubelet[1876]: E0209 19:25:48.386437 1876 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal.17b24855d9910cb6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", UID:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 25, 42, 286421174, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 25, 42, 305611346, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 19:25:48.444096 kubelet[1876]: E0209 19:25:48.443919 1876 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal.17b24855d9913083", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", UID:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 25, 42, 286430339, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 25, 42, 305621423, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:25:48.501493 kubelet[1876]: E0209 19:25:48.501060 1876 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal.17b24855d9914577", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", UID:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 25, 42, 286435703, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 25, 42, 305626586, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 19:25:48.556176 kubelet[1876]: E0209 19:25:48.556053 1876 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal.17b24855dd903ad6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", UID:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 25, 42, 353476310, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 25, 42, 353476310, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:25:48.767103 kubelet[1876]: E0209 19:25:48.766850 1876 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal.17b24855d9910cb6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", UID:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 25, 42, 286421174, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 25, 42, 513303311, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 19:25:49.167624 kubelet[1876]: E0209 19:25:49.167506 1876 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal.17b24855d9913083", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", UID:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 25, 42, 286430339, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 25, 42, 513318124, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 19:25:50.581161 systemd[1]: Reloading. 
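The Event objects rejected above failed because the kubelet posted its node-status events into the "default" namespace before that namespace existed, and it notes it will not retry. The hex suffix on each event name is the event's first timestamp in Unix nanoseconds, e.g. 17b24855d2b4f29e in the first rejected record decodes back to 19:25:42.171333278, matching that record's FirstTimestamp. A quick Python check (illustrative):

from datetime import datetime, timezone

# The event-name suffix is the FirstTimestamp in Unix nanoseconds, hex-encoded.
ns = int("17b24855d2b4f29e", 16)
print(ns, datetime.fromtimestamp(ns / 1e9, tz=timezone.utc))
# -> 1707506742171333278 2024-02-09 19:25:42.171333+00:00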
Feb 9 19:25:50.690888 /usr/lib/systemd/system-generators/torcx-generator[2216]: time="2024-02-09T19:25:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:25:50.690934 /usr/lib/systemd/system-generators/torcx-generator[2216]: time="2024-02-09T19:25:50Z" level=info msg="torcx already run" Feb 9 19:25:50.805995 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:25:50.806022 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:25:50.832355 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:25:50.974233 kubelet[1876]: I0209 19:25:50.974195 1876 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:25:50.975653 systemd[1]: Stopping kubelet.service... Feb 9 19:25:50.993073 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 19:25:50.993611 systemd[1]: Stopped kubelet.service. Feb 9 19:25:50.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:50.999179 kernel: kauditd_printk_skb: 104 callbacks suppressed Feb 9 19:25:50.999282 kernel: audit: type=1131 audit(1707506750.992:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:51.021806 systemd[1]: Started kubelet.service. Feb 9 19:25:51.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:51.053367 kernel: audit: type=1130 audit(1707506751.021:224): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:51.187903 kubelet[2267]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:25:51.187903 kubelet[2267]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:25:51.188541 kubelet[2267]: I0209 19:25:51.187991 2267 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:25:51.190474 kubelet[2267]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
Feb 9 19:25:51.190474 kubelet[2267]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:25:51.196074 kubelet[2267]: I0209 19:25:51.196036 2267 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 19:25:51.196074 kubelet[2267]: I0209 19:25:51.196076 2267 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:25:51.196407 kubelet[2267]: I0209 19:25:51.196377 2267 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 19:25:51.198607 kubelet[2267]: I0209 19:25:51.198575 2267 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 19:25:51.204513 kubelet[2267]: I0209 19:25:51.204486 2267 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:25:51.209796 kubelet[2267]: I0209 19:25:51.209758 2267 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 19:25:51.210448 kubelet[2267]: I0209 19:25:51.210424 2267 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:25:51.210542 kubelet[2267]: I0209 19:25:51.210533 2267 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:25:51.210707 kubelet[2267]: I0209 19:25:51.210563 2267 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:25:51.210707 kubelet[2267]: I0209 19:25:51.210582 2267 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 19:25:51.210707 kubelet[2267]: I0209 19:25:51.210635 2267 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:25:51.221136 kubelet[2267]: I0209 19:25:51.220347 2267 kubelet.go:398] "Attempting to sync node with API server" Feb 9 19:25:51.221136 kubelet[2267]: I0209 19:25:51.220377 2267 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:25:51.221136 kubelet[2267]: I0209 19:25:51.220410 2267 kubelet.go:297] "Adding apiserver pod source" Feb 9 19:25:51.221136 
kubelet[2267]: I0209 19:25:51.220433 2267 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:25:51.225498 kubelet[2267]: I0209 19:25:51.225471 2267 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:25:51.239623 kubelet[2267]: I0209 19:25:51.239594 2267 server.go:1186] "Started kubelet" Feb 9 19:25:51.240000 audit[2267]: AVC avc: denied { mac_admin } for pid=2267 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:25:51.265675 kubelet[2267]: I0209 19:25:51.242337 2267 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 9 19:25:51.265675 kubelet[2267]: I0209 19:25:51.242391 2267 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 9 19:25:51.265675 kubelet[2267]: I0209 19:25:51.242418 2267 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:25:51.265675 kubelet[2267]: I0209 19:25:51.247160 2267 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:25:51.265675 kubelet[2267]: I0209 19:25:51.248089 2267 server.go:451] "Adding debug handlers to kubelet server" Feb 9 19:25:51.266328 kernel: audit: type=1400 audit(1707506751.240:225): avc: denied { mac_admin } for pid=2267 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:25:51.278988 kubelet[2267]: I0209 19:25:51.278951 2267 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 19:25:51.240000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:25:51.291366 kernel: audit: type=1401 audit(1707506751.240:225): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:25:51.240000 audit[2267]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000ec24e0 a1=c000707e30 a2=c000ec24b0 a3=25 items=0 ppid=1 pid=2267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:51.331471 kernel: audit: type=1300 audit(1707506751.240:225): arch=c000003e syscall=188 success=no exit=-22 a0=c000ec24e0 a1=c000707e30 a2=c000ec24b0 a3=25 items=0 ppid=1 pid=2267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:51.336822 kubelet[2267]: I0209 19:25:51.336787 2267 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:25:51.347259 kubelet[2267]: E0209 19:25:51.347228 2267 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:25:51.347538 kubelet[2267]: E0209 19:25:51.347521 2267 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:25:51.240000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:25:51.388310 kernel: audit: type=1327 audit(1707506751.240:225): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:25:51.397819 kubelet[2267]: E0209 19:25:51.397789 2267 container_manager_linux.go:945] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" Feb 9 19:25:51.241000 audit[2267]: AVC avc: denied { mac_admin } for pid=2267 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:25:51.439309 kernel: audit: type=1400 audit(1707506751.241:226): avc: denied { mac_admin } for pid=2267 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:25:51.447786 kubelet[2267]: I0209 19:25:51.447755 2267 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:25:51.241000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:25:51.473339 kernel: audit: type=1401 audit(1707506751.241:226): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:25:51.513513 kernel: audit: type=1300 audit(1707506751.241:226): arch=c000003e syscall=188 success=no exit=-22 a0=c00098fee0 a1=c000707e48 a2=c000ec2570 a3=25 items=0 ppid=1 pid=2267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:51.241000 audit[2267]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00098fee0 a1=c000707e48 a2=c000ec2570 a3=25 items=0 ppid=1 pid=2267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:51.514918 kubelet[2267]: I0209 19:25:51.514886 2267 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:25:51.515844 kubelet[2267]: I0209 19:25:51.515815 2267 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:25:51.241000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:25:51.572339 kernel: audit: type=1327 audit(1707506751.241:226): 
proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:25:51.582698 kubelet[2267]: I0209 19:25:51.582662 2267 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 19:25:51.698916 kubelet[2267]: I0209 19:25:51.698793 2267 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:25:51.699099 kubelet[2267]: I0209 19:25:51.699082 2267 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:25:51.699490 kubelet[2267]: I0209 19:25:51.699474 2267 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:25:51.699780 kubelet[2267]: I0209 19:25:51.699765 2267 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 19:25:51.699895 kubelet[2267]: I0209 19:25:51.699882 2267 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 19:25:51.699981 kubelet[2267]: I0209 19:25:51.699971 2267 policy_none.go:49] "None policy: Start" Feb 9 19:25:51.705928 kubelet[2267]: I0209 19:25:51.705903 2267 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:25:51.706112 kubelet[2267]: I0209 19:25:51.706096 2267 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:25:51.706470 kubelet[2267]: I0209 19:25:51.706443 2267 state_mem.go:75] "Updated machine memory state" Feb 9 19:25:51.709249 kubelet[2267]: I0209 19:25:51.709220 2267 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:25:51.708000 audit[2267]: AVC avc: denied { mac_admin } for pid=2267 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:25:51.708000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:25:51.708000 audit[2267]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000ff17d0 a1=c0012a1488 a2=c000ff17a0 a3=25 items=0 ppid=1 pid=2267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:25:51.708000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:25:51.709907 kubelet[2267]: I0209 19:25:51.709883 2267 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 9 19:25:51.713100 kubelet[2267]: I0209 19:25:51.713078 2267 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:25:51.714636 kubelet[2267]: I0209 19:25:51.714606 2267 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 19:25:51.714836 kubelet[2267]: I0209 19:25:51.714821 2267 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 19:25:51.714984 kubelet[2267]: I0209 19:25:51.714968 2267 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 19:25:51.715262 kubelet[2267]: E0209 19:25:51.715238 2267 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 19:25:51.816504 kubelet[2267]: I0209 19:25:51.816455 2267 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:25:51.816828 kubelet[2267]: I0209 19:25:51.816810 2267 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:25:51.819034 kubelet[2267]: I0209 19:25:51.819008 2267 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:25:51.844007 kubelet[2267]: I0209 19:25:51.843974 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/785017324691e7c30ab3d931bb41aaf4-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal\" (UID: \"785017324691e7c30ab3d931bb41aaf4\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:25:51.844350 kubelet[2267]: I0209 19:25:51.844332 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b3e69a1beafcbb94fef931534e28fb8d-kubeconfig\") pod \"kube-scheduler-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal\" (UID: \"b3e69a1beafcbb94fef931534e28fb8d\") " pod="kube-system/kube-scheduler-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:25:51.844575 kubelet[2267]: I0209 19:25:51.844551 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9a04624a2b056f7ac7821a08cbcbf416-ca-certs\") pod \"kube-apiserver-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal\" (UID: \"9a04624a2b056f7ac7821a08cbcbf416\") " pod="kube-system/kube-apiserver-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:25:51.844745 kubelet[2267]: I0209 19:25:51.844733 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/785017324691e7c30ab3d931bb41aaf4-ca-certs\") pod \"kube-controller-manager-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal\" (UID: \"785017324691e7c30ab3d931bb41aaf4\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:25:51.844914 kubelet[2267]: I0209 19:25:51.844901 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/785017324691e7c30ab3d931bb41aaf4-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal\" (UID: \"785017324691e7c30ab3d931bb41aaf4\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:25:51.845077 kubelet[2267]: I0209 19:25:51.845065 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/785017324691e7c30ab3d931bb41aaf4-kubeconfig\") pod 
\"kube-controller-manager-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal\" (UID: \"785017324691e7c30ab3d931bb41aaf4\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:25:51.845248 kubelet[2267]: I0209 19:25:51.845236 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/785017324691e7c30ab3d931bb41aaf4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal\" (UID: \"785017324691e7c30ab3d931bb41aaf4\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:25:51.845438 kubelet[2267]: I0209 19:25:51.845424 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9a04624a2b056f7ac7821a08cbcbf416-k8s-certs\") pod \"kube-apiserver-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal\" (UID: \"9a04624a2b056f7ac7821a08cbcbf416\") " pod="kube-system/kube-apiserver-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:25:51.845599 kubelet[2267]: I0209 19:25:51.845587 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9a04624a2b056f7ac7821a08cbcbf416-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal\" (UID: \"9a04624a2b056f7ac7821a08cbcbf416\") " pod="kube-system/kube-apiserver-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:25:52.235280 kubelet[2267]: I0209 19:25:52.235230 2267 apiserver.go:52] "Watching apiserver" Feb 9 19:25:52.337972 kubelet[2267]: I0209 19:25:52.337924 2267 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:25:52.348183 kubelet[2267]: I0209 19:25:52.348138 2267 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:25:53.066890 kubelet[2267]: E0209 19:25:53.066854 2267 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:25:53.228969 kubelet[2267]: E0209 19:25:53.228914 2267 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-scheduler-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:25:53.448882 kubelet[2267]: E0209 19:25:53.448851 2267 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-controller-manager-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:25:53.634571 kubelet[2267]: I0209 19:25:53.634521 2267 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" podStartSLOduration=2.6343478449999997 pod.CreationTimestamp="2024-02-09 19:25:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:25:52.651126608 
+0000 UTC m=+1.608804645" watchObservedRunningTime="2024-02-09 19:25:53.634347845 +0000 UTC m=+2.592025860" Feb 9 19:25:54.835165 kubelet[2267]: I0209 19:25:54.835112 2267 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" podStartSLOduration=3.835057808 pod.CreationTimestamp="2024-02-09 19:25:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:25:54.476190951 +0000 UTC m=+3.433868990" watchObservedRunningTime="2024-02-09 19:25:54.835057808 +0000 UTC m=+3.792735833" Feb 9 19:25:54.835837 kubelet[2267]: I0209 19:25:54.835277 2267 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" podStartSLOduration=3.835248751 pod.CreationTimestamp="2024-02-09 19:25:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:25:54.833867075 +0000 UTC m=+3.791545106" watchObservedRunningTime="2024-02-09 19:25:54.835248751 +0000 UTC m=+3.792926787" Feb 9 19:25:56.810474 sudo[1484]: pam_unix(sudo:session): session closed for user root Feb 9 19:25:56.809000 audit[1484]: USER_END pid=1484 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:25:56.815961 kernel: kauditd_printk_skb: 4 callbacks suppressed Feb 9 19:25:56.816057 kernel: audit: type=1106 audit(1707506756.809:228): pid=1484 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:25:56.815000 audit[1484]: CRED_DISP pid=1484 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:25:56.866762 kernel: audit: type=1104 audit(1707506756.815:229): pid=1484 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:25:56.888670 sshd[1480]: pam_unix(sshd:session): session closed for user core Feb 9 19:25:56.889000 audit[1480]: USER_END pid=1480 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:25:56.893669 systemd[1]: sshd@8-10.128.0.66:22-147.75.109.163:54098.service: Deactivated successfully. Feb 9 19:25:56.895094 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 19:25:56.899168 systemd-logind[1201]: Session 9 logged out. Waiting for processes to exit. Feb 9 19:25:56.900946 systemd-logind[1201]: Removed session 9. 
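The pod_startup_latency_tracker entries above report a podStartSLOduration for each static pod. With no image pull involved (firstStartedPulling and lastFinishedPulling are the zero time), the reported value in these entries works out to watchObservedRunningTime minus pod.CreationTimestamp; a quick cross-check of the kube-apiserver entry, with values copied from the log (Python, illustrative only):

    from datetime import datetime, timezone

    created = datetime(2024, 2, 9, 19, 25, 51, tzinfo=timezone.utc)          # pod.CreationTimestamp
    watched = datetime(2024, 2, 9, 19, 25, 53, 634348, tzinfo=timezone.utc)  # watchObservedRunningTime, rounded to microseconds
    print((watched - created).total_seconds())  # 2.634348, matching podStartSLOduration=2.634347845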
Feb 9 19:25:56.889000 audit[1480]: CRED_DISP pid=1480 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:25:56.951343 kernel: audit: type=1106 audit(1707506756.889:230): pid=1480 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:25:56.951534 kernel: audit: type=1104 audit(1707506756.889:231): pid=1480 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:25:56.951578 kernel: audit: type=1131 audit(1707506756.892:232): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.128.0.66:22-147.75.109.163:54098 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:25:56.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.128.0.66:22-147.75.109.163:54098 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:03.847490 kubelet[2267]: I0209 19:26:03.847442 2267 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 19:26:03.848116 env[1221]: time="2024-02-09T19:26:03.848065220Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
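The PROCTITLE fields in the audit records above (and in the netfilter records further down) carry the audited command line as hex-encoded bytes with NUL-separated arguments, truncated at the audit proctitle length limit, which is why the kubelet's last flag is cut off mid-word. A minimal decoding sketch in Python, using the kubelet value copied verbatim from the records above:

    # Decode an audit PROCTITLE value: hex-encoded bytes, NUL-separated argv.
    KUBELET_PROCTITLE = (
        "2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B7562"
        "65636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261"
        "702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F657463"
        "2F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669"
    )

    def decode_proctitle(hex_value: str) -> str:
        """Split the decoded bytes on NUL and join the arguments with spaces."""
        raw = bytes.fromhex(hex_value)
        return " ".join(p.decode("utf-8", "replace") for p in raw.split(b"\x00") if p)

    print(decode_proctitle(KUBELET_PROCTITLE))
    # /opt/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
    #   --kubeconfig=/etc/kubernetes/kubelet.conf --confi   (record is truncated here)

The adjacent AVC { mac_admin } denials (capability 33, CAP_MAC_ADMIN) and SELINUX_ERR setxattr records pair with the kubelet "could not set selinux context" warnings earlier in the log; kubelet notes the limitation and continues starting up.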
Feb 9 19:26:03.848559 kubelet[2267]: I0209 19:26:03.848383 2267 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 19:26:04.521030 kubelet[2267]: I0209 19:26:04.520988 2267 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:26:04.633095 kubelet[2267]: I0209 19:26:04.632998 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6e2c6f38-b028-449a-a033-d9818e47b396-kube-proxy\") pod \"kube-proxy-rwwsw\" (UID: \"6e2c6f38-b028-449a-a033-d9818e47b396\") " pod="kube-system/kube-proxy-rwwsw" Feb 9 19:26:04.633310 kubelet[2267]: I0209 19:26:04.633131 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e2c6f38-b028-449a-a033-d9818e47b396-lib-modules\") pod \"kube-proxy-rwwsw\" (UID: \"6e2c6f38-b028-449a-a033-d9818e47b396\") " pod="kube-system/kube-proxy-rwwsw" Feb 9 19:26:04.633310 kubelet[2267]: I0209 19:26:04.633193 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtzrh\" (UniqueName: \"kubernetes.io/projected/6e2c6f38-b028-449a-a033-d9818e47b396-kube-api-access-rtzrh\") pod \"kube-proxy-rwwsw\" (UID: \"6e2c6f38-b028-449a-a033-d9818e47b396\") " pod="kube-system/kube-proxy-rwwsw" Feb 9 19:26:04.633310 kubelet[2267]: I0209 19:26:04.633230 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e2c6f38-b028-449a-a033-d9818e47b396-xtables-lock\") pod \"kube-proxy-rwwsw\" (UID: \"6e2c6f38-b028-449a-a033-d9818e47b396\") " pod="kube-system/kube-proxy-rwwsw" Feb 9 19:26:04.829887 env[1221]: time="2024-02-09T19:26:04.829734234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rwwsw,Uid:6e2c6f38-b028-449a-a033-d9818e47b396,Namespace:kube-system,Attempt:0,}" Feb 9 19:26:04.869368 env[1221]: time="2024-02-09T19:26:04.868248024Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:26:04.869368 env[1221]: time="2024-02-09T19:26:04.868324139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:26:04.869368 env[1221]: time="2024-02-09T19:26:04.868353565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:26:04.873181 env[1221]: time="2024-02-09T19:26:04.873093160Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e084caba7c405cbe9ac7fee8249c9ce2bdf92792575fc009ecd8bfea2443fbd1 pid=2377 runtime=io.containerd.runc.v2 Feb 9 19:26:04.887336 kubelet[2267]: I0209 19:26:04.885446 2267 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:26:04.912372 systemd[1]: run-containerd-runc-k8s.io-e084caba7c405cbe9ac7fee8249c9ce2bdf92792575fc009ecd8bfea2443fbd1-runc.gnNKVY.mount: Deactivated successfully. 
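The NETFILTER_CFG / SYSCALL / PROCTITLE triples that dominate the rest of this section record the freshly started kube-proxy creating and wiring up its base chains (KUBE-PROXY-CANARY, KUBE-EXTERNAL-SERVICES, KUBE-NODEPORTS, KUBE-SERVICES, KUBE-FORWARD, KUBE-PROXY-FIREWALL) in the mangle, filter and nat tables, once via iptables (family=2, AF_INET) and once via ip6tables (family=10, AF_INET6). The same proctitle decoding applies; a small sketch that scans a saved copy of this journal for the command behind each record (the "boot.log" file name is illustrative only):

    import re

    hexval = re.compile(r"proctitle=((?:[0-9A-Fa-f]{2})+)")

    def decode(h: str) -> str:
        return " ".join(p.decode("utf-8", "replace")
                        for p in bytes.fromhex(h).split(b"\x00") if p)

    with open("boot.log", encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for m in hexval.finditer(line):
                print(decode(m.group(1)))

    # The first NETFILTER_CFG record below, for example, decodes to:
    #   ip6tables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle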
Feb 9 19:26:04.936872 kubelet[2267]: I0209 19:26:04.936832 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1b82a50d-23aa-4c2e-82c7-6765cc6ac8aa-var-lib-calico\") pod \"tigera-operator-cfc98749c-4ggh6\" (UID: \"1b82a50d-23aa-4c2e-82c7-6765cc6ac8aa\") " pod="tigera-operator/tigera-operator-cfc98749c-4ggh6" Feb 9 19:26:04.937073 kubelet[2267]: I0209 19:26:04.936912 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfk4f\" (UniqueName: \"kubernetes.io/projected/1b82a50d-23aa-4c2e-82c7-6765cc6ac8aa-kube-api-access-vfk4f\") pod \"tigera-operator-cfc98749c-4ggh6\" (UID: \"1b82a50d-23aa-4c2e-82c7-6765cc6ac8aa\") " pod="tigera-operator/tigera-operator-cfc98749c-4ggh6" Feb 9 19:26:04.958562 env[1221]: time="2024-02-09T19:26:04.958459882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rwwsw,Uid:6e2c6f38-b028-449a-a033-d9818e47b396,Namespace:kube-system,Attempt:0,} returns sandbox id \"e084caba7c405cbe9ac7fee8249c9ce2bdf92792575fc009ecd8bfea2443fbd1\"" Feb 9 19:26:04.962754 env[1221]: time="2024-02-09T19:26:04.962702956Z" level=info msg="CreateContainer within sandbox \"e084caba7c405cbe9ac7fee8249c9ce2bdf92792575fc009ecd8bfea2443fbd1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 19:26:04.985589 env[1221]: time="2024-02-09T19:26:04.985528745Z" level=info msg="CreateContainer within sandbox \"e084caba7c405cbe9ac7fee8249c9ce2bdf92792575fc009ecd8bfea2443fbd1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"611088b1eeb90f4be564acf1bc29e63fddf0eb069b0cde4fae587f69898b20f6\"" Feb 9 19:26:04.988266 env[1221]: time="2024-02-09T19:26:04.986227391Z" level=info msg="StartContainer for \"611088b1eeb90f4be564acf1bc29e63fddf0eb069b0cde4fae587f69898b20f6\"" Feb 9 19:26:05.083648 env[1221]: time="2024-02-09T19:26:05.080217950Z" level=info msg="StartContainer for \"611088b1eeb90f4be564acf1bc29e63fddf0eb069b0cde4fae587f69898b20f6\" returns successfully" Feb 9 19:26:05.140000 audit[2469]: NETFILTER_CFG table=mangle:59 family=10 entries=1 op=nft_register_chain pid=2469 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:05.164346 kernel: audit: type=1325 audit(1707506765.140:233): table=mangle:59 family=10 entries=1 op=nft_register_chain pid=2469 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:05.164591 kernel: audit: type=1300 audit(1707506765.140:233): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc37ed8ba0 a2=0 a3=7ffc37ed8b8c items=0 ppid=2426 pid=2469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.140000 audit[2469]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc37ed8ba0 a2=0 a3=7ffc37ed8b8c items=0 ppid=2426 pid=2469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.140000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 19:26:05.210253 kernel: audit: type=1327 audit(1707506765.140:233): 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 19:26:05.210430 kernel: audit: type=1325 audit(1707506765.158:234): table=mangle:60 family=2 entries=1 op=nft_register_chain pid=2470 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:05.158000 audit[2470]: NETFILTER_CFG table=mangle:60 family=2 entries=1 op=nft_register_chain pid=2470 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:05.222545 env[1221]: time="2024-02-09T19:26:05.222491233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-cfc98749c-4ggh6,Uid:1b82a50d-23aa-4c2e-82c7-6765cc6ac8aa,Namespace:tigera-operator,Attempt:0,}" Feb 9 19:26:05.158000 audit[2470]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdd5324c00 a2=0 a3=7ffdd5324bec items=0 ppid=2426 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.257945 kernel: audit: type=1300 audit(1707506765.158:234): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdd5324c00 a2=0 a3=7ffdd5324bec items=0 ppid=2426 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.258103 kernel: audit: type=1327 audit(1707506765.158:234): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 19:26:05.158000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 19:26:05.170000 audit[2471]: NETFILTER_CFG table=nat:61 family=10 entries=1 op=nft_register_chain pid=2471 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:05.170000 audit[2471]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff2a8f8d30 a2=0 a3=7fff2a8f8d1c items=0 ppid=2426 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.316045 env[1221]: time="2024-02-09T19:26:05.315960644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:26:05.316307 env[1221]: time="2024-02-09T19:26:05.316259715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:26:05.316470 env[1221]: time="2024-02-09T19:26:05.316438053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:26:05.316777 env[1221]: time="2024-02-09T19:26:05.316741915Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2f2a3896963daaa41c160a4688ba537eb58afcc34a9cd2c5717cc7881bd62cfa pid=2506 runtime=io.containerd.runc.v2 Feb 9 19:26:05.326059 kernel: audit: type=1325 audit(1707506765.170:235): table=nat:61 family=10 entries=1 op=nft_register_chain pid=2471 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:05.326198 kernel: audit: type=1300 audit(1707506765.170:235): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff2a8f8d30 a2=0 a3=7fff2a8f8d1c items=0 ppid=2426 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.170000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 19:26:05.173000 audit[2472]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_chain pid=2472 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:05.365620 kernel: audit: type=1327 audit(1707506765.170:235): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 19:26:05.365751 kernel: audit: type=1325 audit(1707506765.173:236): table=nat:62 family=2 entries=1 op=nft_register_chain pid=2472 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:05.173000 audit[2472]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcf59c81e0 a2=0 a3=7ffcf59c81cc items=0 ppid=2426 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.173000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 19:26:05.179000 audit[2473]: NETFILTER_CFG table=filter:63 family=10 entries=1 op=nft_register_chain pid=2473 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:05.179000 audit[2473]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff10818870 a2=0 a3=7fff1081885c items=0 ppid=2426 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.179000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 19:26:05.181000 audit[2474]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_chain pid=2474 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:05.181000 audit[2474]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff29062850 a2=0 a3=7fff2906283c items=0 ppid=2426 pid=2474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.181000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 19:26:05.245000 audit[2475]: NETFILTER_CFG table=filter:65 family=2 
entries=1 op=nft_register_chain pid=2475 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:05.245000 audit[2475]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffdd24f92a0 a2=0 a3=7ffdd24f928c items=0 ppid=2426 pid=2475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.245000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 19:26:05.253000 audit[2477]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=2477 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:05.253000 audit[2477]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffde6373360 a2=0 a3=7ffde637334c items=0 ppid=2426 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.253000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Feb 9 19:26:05.256000 audit[2480]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=2480 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:05.256000 audit[2480]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff3634fd50 a2=0 a3=7fff3634fd3c items=0 ppid=2426 pid=2480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.256000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Feb 9 19:26:05.261000 audit[2481]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=2481 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:05.261000 audit[2481]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff1b50c3d0 a2=0 a3=7fff1b50c3bc items=0 ppid=2426 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.261000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 19:26:05.261000 audit[2483]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=2483 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:05.261000 audit[2483]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe5a704f90 a2=0 a3=7ffe5a704f7c items=0 ppid=2426 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.261000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 19:26:05.266000 audit[2484]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=2484 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:05.266000 audit[2484]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe2b4c5bd0 a2=0 a3=7ffe2b4c5bbc items=0 ppid=2426 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.266000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 19:26:05.271000 audit[2486]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=2486 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:05.271000 audit[2486]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe237c34d0 a2=0 a3=7ffe237c34bc items=0 ppid=2426 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.271000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 19:26:05.273000 audit[2489]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=2489 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:05.273000 audit[2489]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc432a3cf0 a2=0 a3=7ffc432a3cdc items=0 ppid=2426 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.273000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Feb 9 19:26:05.279000 audit[2490]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=2490 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:05.279000 audit[2490]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb7a89900 a2=0 a3=7ffeb7a898ec items=0 ppid=2426 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.279000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 19:26:05.284000 audit[2492]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=2492 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:05.284000 audit[2492]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff591f6590 a2=0 a3=7fff591f657c items=0 ppid=2426 pid=2492 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.284000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 19:26:05.284000 audit[2493]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_chain pid=2493 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:05.284000 audit[2493]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffda72552c0 a2=0 a3=7ffda72552ac items=0 ppid=2426 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.284000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 19:26:05.292000 audit[2495]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=2495 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:05.292000 audit[2495]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe3bb65920 a2=0 a3=7ffe3bb6590c items=0 ppid=2426 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.292000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 19:26:05.317000 audit[2498]: NETFILTER_CFG table=filter:77 family=2 entries=1 op=nft_register_rule pid=2498 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:05.317000 audit[2498]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd745cfa80 a2=0 a3=7ffd745cfa6c items=0 ppid=2426 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.317000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 19:26:05.383000 audit[2536]: NETFILTER_CFG table=filter:78 family=2 entries=1 op=nft_register_rule pid=2536 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:05.383000 audit[2536]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe27e08b50 a2=0 a3=7ffe27e08b3c items=0 ppid=2426 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.383000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 19:26:05.385000 audit[2537]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_chain pid=2537 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:05.385000 audit[2537]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffee780ddb0 a2=0 a3=7ffee780dd9c items=0 ppid=2426 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.385000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 19:26:05.389000 audit[2539]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_rule pid=2539 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:05.389000 audit[2539]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe890bc2e0 a2=0 a3=7ffe890bc2cc items=0 ppid=2426 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.389000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:26:05.400000 audit[2542]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=2542 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:26:05.400000 audit[2542]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe8c2184b0 a2=0 a3=7ffe8c21849c items=0 ppid=2426 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.400000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:26:05.431000 audit[2546]: NETFILTER_CFG table=filter:82 family=2 entries=6 op=nft_register_rule pid=2546 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:05.431000 audit[2546]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffe73283470 a2=0 a3=7ffe7328345c items=0 ppid=2426 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.431000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:05.442788 env[1221]: time="2024-02-09T19:26:05.442686786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-cfc98749c-4ggh6,Uid:1b82a50d-23aa-4c2e-82c7-6765cc6ac8aa,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2f2a3896963daaa41c160a4688ba537eb58afcc34a9cd2c5717cc7881bd62cfa\"" Feb 9 19:26:05.444000 audit[2546]: NETFILTER_CFG 
table=nat:83 family=2 entries=17 op=nft_register_chain pid=2546 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:05.444000 audit[2546]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffe73283470 a2=0 a3=7ffe7328345c items=0 ppid=2426 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.444000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:05.449849 kubelet[2267]: E0209 19:26:05.449318 2267 gcpcredential.go:74] while reading 'google-dockercfg-url' metadata: http status code: 404 while fetching url http://metadata.google.internal./computeMetadata/v1/instance/attributes/google-dockercfg-url Feb 9 19:26:05.448000 audit[2557]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2557 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:05.448000 audit[2557]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe7a5f34f0 a2=0 a3=7ffe7a5f34dc items=0 ppid=2426 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.448000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 19:26:05.450670 env[1221]: time="2024-02-09T19:26:05.450626127Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\"" Feb 9 19:26:05.454000 audit[2559]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=2559 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:05.454000 audit[2559]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff74442940 a2=0 a3=7fff7444292c items=0 ppid=2426 pid=2559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.454000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Feb 9 19:26:05.466000 audit[2562]: NETFILTER_CFG table=filter:86 family=10 entries=2 op=nft_register_chain pid=2562 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:05.466000 audit[2562]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd2a69c180 a2=0 a3=7ffd2a69c16c items=0 ppid=2426 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.466000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Feb 9 19:26:05.469000 audit[2563]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=2563 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 
19:26:05.469000 audit[2563]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd2c750610 a2=0 a3=7ffd2c7505fc items=0 ppid=2426 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.469000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 19:26:05.475000 audit[2565]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=2565 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:05.475000 audit[2565]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc8ebe9120 a2=0 a3=7ffc8ebe910c items=0 ppid=2426 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.475000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 19:26:05.478000 audit[2566]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=2566 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:05.478000 audit[2566]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc7562b8a0 a2=0 a3=7ffc7562b88c items=0 ppid=2426 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.478000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 19:26:05.482000 audit[2568]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=2568 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:05.482000 audit[2568]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd56175e40 a2=0 a3=7ffd56175e2c items=0 ppid=2426 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.482000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Feb 9 19:26:05.488000 audit[2571]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=2571 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:05.488000 audit[2571]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffee5bef740 a2=0 a3=7ffee5bef72c items=0 ppid=2426 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.488000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 19:26:05.490000 audit[2572]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=2572 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:05.490000 audit[2572]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe3a611030 a2=0 a3=7ffe3a61101c items=0 ppid=2426 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.490000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 19:26:05.493000 audit[2574]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=2574 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:05.493000 audit[2574]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffd59922a0 a2=0 a3=7fffd599228c items=0 ppid=2426 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.493000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 19:26:05.496000 audit[2575]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=2575 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:05.496000 audit[2575]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdc49385d0 a2=0 a3=7ffdc49385bc items=0 ppid=2426 pid=2575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.496000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 19:26:05.501000 audit[2577]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=2577 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:05.501000 audit[2577]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc9bd1f7f0 a2=0 a3=7ffc9bd1f7dc items=0 ppid=2426 pid=2577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.501000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 19:26:05.506000 audit[2580]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=2580 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:05.506000 audit[2580]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd04f2ed30 a2=0 a3=7ffd04f2ed1c items=0 ppid=2426 
pid=2580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.506000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 19:26:05.511000 audit[2583]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=2583 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:05.511000 audit[2583]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff50609570 a2=0 a3=7fff5060955c items=0 ppid=2426 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.511000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Feb 9 19:26:05.514000 audit[2584]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=2584 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:05.514000 audit[2584]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc28be02e0 a2=0 a3=7ffc28be02cc items=0 ppid=2426 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.514000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 19:26:05.518000 audit[2586]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=2586 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:05.518000 audit[2586]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffc44286c90 a2=0 a3=7ffc44286c7c items=0 ppid=2426 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.518000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:26:05.524000 audit[2589]: NETFILTER_CFG table=nat:100 family=10 entries=2 op=nft_register_chain pid=2589 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:26:05.524000 audit[2589]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffe21f1c4e0 a2=0 a3=7ffe21f1c4cc items=0 ppid=2426 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.524000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:26:05.535000 audit[2593]: NETFILTER_CFG table=filter:101 family=10 entries=3 op=nft_register_rule pid=2593 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 19:26:05.535000 audit[2593]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffd39c472d0 a2=0 a3=7ffd39c472bc items=0 ppid=2426 pid=2593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.535000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:05.536000 audit[2593]: NETFILTER_CFG table=nat:102 family=10 entries=10 op=nft_register_chain pid=2593 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 19:26:05.536000 audit[2593]: SYSCALL arch=c000003e syscall=46 success=yes exit=1968 a0=3 a1=7ffd39c472d0 a2=0 a3=7ffd39c472bc items=0 ppid=2426 pid=2593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:05.536000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:06.284172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1460010950.mount: Deactivated successfully. Feb 9 19:26:07.469583 env[1221]: time="2024-02-09T19:26:07.469505100Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:07.472970 env[1221]: time="2024-02-09T19:26:07.472912477Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7bc79e0d3be4fa8c35133127424f9b1ec775af43145b7dd58637905c76084827,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:07.476746 env[1221]: time="2024-02-09T19:26:07.476674980Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:07.479971 env[1221]: time="2024-02-09T19:26:07.479907456Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:715ac9a30f8a9579e44258af20de354715429e11836b493918e9e1a696e9b028,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:07.481541 env[1221]: time="2024-02-09T19:26:07.481485475Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\" returns image reference \"sha256:7bc79e0d3be4fa8c35133127424f9b1ec775af43145b7dd58637905c76084827\"" Feb 9 19:26:07.486695 env[1221]: time="2024-02-09T19:26:07.486652753Z" level=info msg="CreateContainer within sandbox \"2f2a3896963daaa41c160a4688ba537eb58afcc34a9cd2c5717cc7881bd62cfa\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 9 19:26:07.508531 env[1221]: time="2024-02-09T19:26:07.508444506Z" level=info msg="CreateContainer within sandbox \"2f2a3896963daaa41c160a4688ba537eb58afcc34a9cd2c5717cc7881bd62cfa\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id 
\"26ba39b98fafc7d49907cf9e28a5e63a85e016c4082ae745ce2c3ee6d3bda20e\"" Feb 9 19:26:07.510325 env[1221]: time="2024-02-09T19:26:07.509551602Z" level=info msg="StartContainer for \"26ba39b98fafc7d49907cf9e28a5e63a85e016c4082ae745ce2c3ee6d3bda20e\"" Feb 9 19:26:07.562005 systemd[1]: run-containerd-runc-k8s.io-26ba39b98fafc7d49907cf9e28a5e63a85e016c4082ae745ce2c3ee6d3bda20e-runc.q3kmGF.mount: Deactivated successfully. Feb 9 19:26:07.619678 env[1221]: time="2024-02-09T19:26:07.619608733Z" level=info msg="StartContainer for \"26ba39b98fafc7d49907cf9e28a5e63a85e016c4082ae745ce2c3ee6d3bda20e\" returns successfully" Feb 9 19:26:07.808980 kubelet[2267]: I0209 19:26:07.806493 2267 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-rwwsw" podStartSLOduration=3.8062526 pod.CreationTimestamp="2024-02-09 19:26:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:26:05.788360375 +0000 UTC m=+14.746038410" watchObservedRunningTime="2024-02-09 19:26:07.8062526 +0000 UTC m=+16.763930638" Feb 9 19:26:09.997000 audit[2658]: NETFILTER_CFG table=filter:103 family=2 entries=13 op=nft_register_rule pid=2658 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:09.997000 audit[2658]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffeb3ae6760 a2=0 a3=7ffeb3ae674c items=0 ppid=2426 pid=2658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:09.997000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:09.998000 audit[2658]: NETFILTER_CFG table=nat:104 family=2 entries=20 op=nft_register_rule pid=2658 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:09.998000 audit[2658]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffeb3ae6760 a2=0 a3=7ffeb3ae674c items=0 ppid=2426 pid=2658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:09.998000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:10.063000 audit[2684]: NETFILTER_CFG table=filter:105 family=2 entries=14 op=nft_register_rule pid=2684 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:10.063000 audit[2684]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffcf84ae870 a2=0 a3=7ffcf84ae85c items=0 ppid=2426 pid=2684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:10.063000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:10.064000 audit[2684]: NETFILTER_CFG table=nat:106 family=2 entries=20 op=nft_register_rule pid=2684 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:10.064000 audit[2684]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffcf84ae870 a2=0 a3=7ffcf84ae85c items=0 ppid=2426 pid=2684 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:10.064000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:10.095047 kubelet[2267]: I0209 19:26:10.095003 2267 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-cfc98749c-4ggh6" podStartSLOduration=-9.223372030759825e+09 pod.CreationTimestamp="2024-02-09 19:26:04 +0000 UTC" firstStartedPulling="2024-02-09 19:26:05.444089768 +0000 UTC m=+14.401767783" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:26:07.8069834 +0000 UTC m=+16.764661436" watchObservedRunningTime="2024-02-09 19:26:10.094951107 +0000 UTC m=+19.052629215" Feb 9 19:26:10.096123 kubelet[2267]: I0209 19:26:10.096076 2267 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:26:10.174248 kubelet[2267]: I0209 19:26:10.174209 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8c0c9af-f3eb-4761-8695-14b0098caf97-tigera-ca-bundle\") pod \"calico-typha-7f8bddf656-hw44g\" (UID: \"e8c0c9af-f3eb-4761-8695-14b0098caf97\") " pod="calico-system/calico-typha-7f8bddf656-hw44g" Feb 9 19:26:10.174660 kubelet[2267]: I0209 19:26:10.174640 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j946c\" (UniqueName: \"kubernetes.io/projected/e8c0c9af-f3eb-4761-8695-14b0098caf97-kube-api-access-j946c\") pod \"calico-typha-7f8bddf656-hw44g\" (UID: \"e8c0c9af-f3eb-4761-8695-14b0098caf97\") " pod="calico-system/calico-typha-7f8bddf656-hw44g" Feb 9 19:26:10.174842 kubelet[2267]: I0209 19:26:10.174823 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e8c0c9af-f3eb-4761-8695-14b0098caf97-typha-certs\") pod \"calico-typha-7f8bddf656-hw44g\" (UID: \"e8c0c9af-f3eb-4761-8695-14b0098caf97\") " pod="calico-system/calico-typha-7f8bddf656-hw44g" Feb 9 19:26:10.217523 kubelet[2267]: I0209 19:26:10.217480 2267 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:26:10.276317 kubelet[2267]: I0209 19:26:10.276142 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqkrk\" (UniqueName: \"kubernetes.io/projected/29d5747b-48e5-42c6-bbcc-5f8820426c55-kube-api-access-cqkrk\") pod \"calico-node-mdqjh\" (UID: \"29d5747b-48e5-42c6-bbcc-5f8820426c55\") " pod="calico-system/calico-node-mdqjh" Feb 9 19:26:10.276317 kubelet[2267]: I0209 19:26:10.276214 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/29d5747b-48e5-42c6-bbcc-5f8820426c55-cni-net-dir\") pod \"calico-node-mdqjh\" (UID: \"29d5747b-48e5-42c6-bbcc-5f8820426c55\") " pod="calico-system/calico-node-mdqjh" Feb 9 19:26:10.276317 kubelet[2267]: I0209 19:26:10.276253 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29d5747b-48e5-42c6-bbcc-5f8820426c55-tigera-ca-bundle\") pod \"calico-node-mdqjh\" (UID: \"29d5747b-48e5-42c6-bbcc-5f8820426c55\") " pod="calico-system/calico-node-mdqjh" Feb 9 19:26:10.276317 
kubelet[2267]: I0209 19:26:10.276325 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/29d5747b-48e5-42c6-bbcc-5f8820426c55-node-certs\") pod \"calico-node-mdqjh\" (UID: \"29d5747b-48e5-42c6-bbcc-5f8820426c55\") " pod="calico-system/calico-node-mdqjh" Feb 9 19:26:10.276674 kubelet[2267]: I0209 19:26:10.276357 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/29d5747b-48e5-42c6-bbcc-5f8820426c55-cni-log-dir\") pod \"calico-node-mdqjh\" (UID: \"29d5747b-48e5-42c6-bbcc-5f8820426c55\") " pod="calico-system/calico-node-mdqjh" Feb 9 19:26:10.276674 kubelet[2267]: I0209 19:26:10.276385 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/29d5747b-48e5-42c6-bbcc-5f8820426c55-lib-modules\") pod \"calico-node-mdqjh\" (UID: \"29d5747b-48e5-42c6-bbcc-5f8820426c55\") " pod="calico-system/calico-node-mdqjh" Feb 9 19:26:10.276674 kubelet[2267]: I0209 19:26:10.276458 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/29d5747b-48e5-42c6-bbcc-5f8820426c55-xtables-lock\") pod \"calico-node-mdqjh\" (UID: \"29d5747b-48e5-42c6-bbcc-5f8820426c55\") " pod="calico-system/calico-node-mdqjh" Feb 9 19:26:10.276674 kubelet[2267]: I0209 19:26:10.276489 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/29d5747b-48e5-42c6-bbcc-5f8820426c55-var-run-calico\") pod \"calico-node-mdqjh\" (UID: \"29d5747b-48e5-42c6-bbcc-5f8820426c55\") " pod="calico-system/calico-node-mdqjh" Feb 9 19:26:10.276674 kubelet[2267]: I0209 19:26:10.276523 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/29d5747b-48e5-42c6-bbcc-5f8820426c55-flexvol-driver-host\") pod \"calico-node-mdqjh\" (UID: \"29d5747b-48e5-42c6-bbcc-5f8820426c55\") " pod="calico-system/calico-node-mdqjh" Feb 9 19:26:10.277003 kubelet[2267]: I0209 19:26:10.276562 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/29d5747b-48e5-42c6-bbcc-5f8820426c55-policysync\") pod \"calico-node-mdqjh\" (UID: \"29d5747b-48e5-42c6-bbcc-5f8820426c55\") " pod="calico-system/calico-node-mdqjh" Feb 9 19:26:10.277003 kubelet[2267]: I0209 19:26:10.276599 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/29d5747b-48e5-42c6-bbcc-5f8820426c55-cni-bin-dir\") pod \"calico-node-mdqjh\" (UID: \"29d5747b-48e5-42c6-bbcc-5f8820426c55\") " pod="calico-system/calico-node-mdqjh" Feb 9 19:26:10.277003 kubelet[2267]: I0209 19:26:10.276639 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/29d5747b-48e5-42c6-bbcc-5f8820426c55-var-lib-calico\") pod \"calico-node-mdqjh\" (UID: \"29d5747b-48e5-42c6-bbcc-5f8820426c55\") " pod="calico-system/calico-node-mdqjh" Feb 9 19:26:10.340406 kubelet[2267]: I0209 19:26:10.340364 2267 topology_manager.go:210] "Topology Admit Handler" Feb 9 
19:26:10.341063 kubelet[2267]: E0209 19:26:10.341035 2267 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4qwfs" podUID=4296df02-d23c-4462-abee-22483f63c36c Feb 9 19:26:10.377840 kubelet[2267]: I0209 19:26:10.377806 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4296df02-d23c-4462-abee-22483f63c36c-registration-dir\") pod \"csi-node-driver-4qwfs\" (UID: \"4296df02-d23c-4462-abee-22483f63c36c\") " pod="calico-system/csi-node-driver-4qwfs" Feb 9 19:26:10.379224 kubelet[2267]: I0209 19:26:10.379200 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4296df02-d23c-4462-abee-22483f63c36c-varrun\") pod \"csi-node-driver-4qwfs\" (UID: \"4296df02-d23c-4462-abee-22483f63c36c\") " pod="calico-system/csi-node-driver-4qwfs" Feb 9 19:26:10.379623 kubelet[2267]: I0209 19:26:10.379580 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4296df02-d23c-4462-abee-22483f63c36c-socket-dir\") pod \"csi-node-driver-4qwfs\" (UID: \"4296df02-d23c-4462-abee-22483f63c36c\") " pod="calico-system/csi-node-driver-4qwfs" Feb 9 19:26:10.379926 kubelet[2267]: I0209 19:26:10.379893 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4296df02-d23c-4462-abee-22483f63c36c-kubelet-dir\") pod \"csi-node-driver-4qwfs\" (UID: \"4296df02-d23c-4462-abee-22483f63c36c\") " pod="calico-system/csi-node-driver-4qwfs" Feb 9 19:26:10.380123 kubelet[2267]: I0209 19:26:10.380096 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5knwc\" (UniqueName: \"kubernetes.io/projected/4296df02-d23c-4462-abee-22483f63c36c-kube-api-access-5knwc\") pod \"csi-node-driver-4qwfs\" (UID: \"4296df02-d23c-4462-abee-22483f63c36c\") " pod="calico-system/csi-node-driver-4qwfs" Feb 9 19:26:10.382491 kubelet[2267]: E0209 19:26:10.382464 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.382622 kubelet[2267]: W0209 19:26:10.382489 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.382622 kubelet[2267]: E0209 19:26:10.382522 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:10.386935 kubelet[2267]: E0209 19:26:10.386454 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.386935 kubelet[2267]: W0209 19:26:10.386475 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.386935 kubelet[2267]: E0209 19:26:10.386583 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:10.386935 kubelet[2267]: E0209 19:26:10.386850 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.386935 kubelet[2267]: W0209 19:26:10.386862 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.386935 kubelet[2267]: E0209 19:26:10.386885 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:10.403061 env[1221]: time="2024-02-09T19:26:10.402363651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f8bddf656-hw44g,Uid:e8c0c9af-f3eb-4761-8695-14b0098caf97,Namespace:calico-system,Attempt:0,}" Feb 9 19:26:10.406324 kubelet[2267]: E0209 19:26:10.404524 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.406324 kubelet[2267]: W0209 19:26:10.404543 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.406324 kubelet[2267]: E0209 19:26:10.404571 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:10.455434 env[1221]: time="2024-02-09T19:26:10.454797179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:26:10.455434 env[1221]: time="2024-02-09T19:26:10.454851701Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:26:10.455434 env[1221]: time="2024-02-09T19:26:10.454869295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:26:10.456266 env[1221]: time="2024-02-09T19:26:10.456173312Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d5ea2a58f14c619eb174e9d3508abd8a99c3e2963d9f6c910bf2b37df633f8bb pid=2698 runtime=io.containerd.runc.v2 Feb 9 19:26:10.486336 kubelet[2267]: E0209 19:26:10.485371 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.486336 kubelet[2267]: W0209 19:26:10.485399 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.486336 kubelet[2267]: E0209 19:26:10.485457 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:10.486336 kubelet[2267]: E0209 19:26:10.485948 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.486336 kubelet[2267]: W0209 19:26:10.485963 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.486336 kubelet[2267]: E0209 19:26:10.485990 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:10.487262 kubelet[2267]: E0209 19:26:10.486953 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.487262 kubelet[2267]: W0209 19:26:10.486971 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.487262 kubelet[2267]: E0209 19:26:10.486998 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:10.488230 kubelet[2267]: E0209 19:26:10.487981 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.488230 kubelet[2267]: W0209 19:26:10.487999 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.488230 kubelet[2267]: E0209 19:26:10.488074 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:10.489016 kubelet[2267]: E0209 19:26:10.488899 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.489016 kubelet[2267]: W0209 19:26:10.488915 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.489331 kubelet[2267]: E0209 19:26:10.489227 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:10.495693 kubelet[2267]: E0209 19:26:10.489597 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.495933 kubelet[2267]: W0209 19:26:10.495872 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.496241 kubelet[2267]: E0209 19:26:10.496222 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:10.496626 kubelet[2267]: E0209 19:26:10.496610 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.496756 kubelet[2267]: W0209 19:26:10.496739 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.496989 kubelet[2267]: E0209 19:26:10.496972 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:10.497323 kubelet[2267]: E0209 19:26:10.497307 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.497451 kubelet[2267]: W0209 19:26:10.497434 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.497663 kubelet[2267]: E0209 19:26:10.497649 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:10.498026 kubelet[2267]: E0209 19:26:10.498010 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.498160 kubelet[2267]: W0209 19:26:10.498144 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.498358 kubelet[2267]: E0209 19:26:10.498342 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:10.498765 kubelet[2267]: E0209 19:26:10.498749 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.498904 kubelet[2267]: W0209 19:26:10.498887 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.499089 kubelet[2267]: E0209 19:26:10.499072 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:10.500012 kubelet[2267]: E0209 19:26:10.499995 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.500199 kubelet[2267]: W0209 19:26:10.500177 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.500514 kubelet[2267]: E0209 19:26:10.500500 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:10.500971 kubelet[2267]: E0209 19:26:10.500956 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.501180 kubelet[2267]: W0209 19:26:10.501139 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.501537 kubelet[2267]: E0209 19:26:10.501521 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:10.502183 kubelet[2267]: E0209 19:26:10.502166 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.502419 kubelet[2267]: W0209 19:26:10.502369 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.502773 kubelet[2267]: E0209 19:26:10.502756 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:10.503320 kubelet[2267]: E0209 19:26:10.503279 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.503481 kubelet[2267]: W0209 19:26:10.503463 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.503714 kubelet[2267]: E0209 19:26:10.503699 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:10.504219 kubelet[2267]: E0209 19:26:10.504204 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.504396 kubelet[2267]: W0209 19:26:10.504378 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.504646 kubelet[2267]: E0209 19:26:10.504632 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:10.505016 kubelet[2267]: E0209 19:26:10.505002 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.505158 kubelet[2267]: W0209 19:26:10.505139 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.505352 kubelet[2267]: E0209 19:26:10.505337 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:10.505794 kubelet[2267]: E0209 19:26:10.505777 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.505943 kubelet[2267]: W0209 19:26:10.505923 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.506258 kubelet[2267]: E0209 19:26:10.506241 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:10.507176 kubelet[2267]: E0209 19:26:10.507156 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.507649 kubelet[2267]: W0209 19:26:10.507627 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.513335 kubelet[2267]: E0209 19:26:10.510481 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:10.518658 kubelet[2267]: E0209 19:26:10.516741 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.518658 kubelet[2267]: W0209 19:26:10.516762 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.518658 kubelet[2267]: E0209 19:26:10.516915 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:10.518658 kubelet[2267]: E0209 19:26:10.517115 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.518658 kubelet[2267]: W0209 19:26:10.517126 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.518658 kubelet[2267]: E0209 19:26:10.517232 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:10.519705 kubelet[2267]: E0209 19:26:10.519687 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.519842 kubelet[2267]: W0209 19:26:10.519822 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.520136 kubelet[2267]: E0209 19:26:10.520086 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:10.520558 kubelet[2267]: E0209 19:26:10.520543 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.520704 kubelet[2267]: W0209 19:26:10.520685 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.520935 kubelet[2267]: E0209 19:26:10.520921 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:10.521261 kubelet[2267]: E0209 19:26:10.521237 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.523451 kubelet[2267]: W0209 19:26:10.523430 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.523585 kubelet[2267]: E0209 19:26:10.523572 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:10.524054 kubelet[2267]: E0209 19:26:10.524038 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.525368 kubelet[2267]: W0209 19:26:10.525340 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.525475 kubelet[2267]: E0209 19:26:10.525389 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:10.533992 kubelet[2267]: E0209 19:26:10.529264 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.534205 kubelet[2267]: W0209 19:26:10.534181 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.534723 kubelet[2267]: E0209 19:26:10.534704 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.534855 kubelet[2267]: W0209 19:26:10.534835 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.534982 kubelet[2267]: E0209 19:26:10.534966 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:10.535139 kubelet[2267]: E0209 19:26:10.535120 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:10.602809 kubelet[2267]: E0209 19:26:10.602778 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.603091 kubelet[2267]: W0209 19:26:10.603067 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.603261 kubelet[2267]: E0209 19:26:10.603244 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:10.605473 kubelet[2267]: E0209 19:26:10.605449 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.605715 kubelet[2267]: W0209 19:26:10.605689 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.605874 kubelet[2267]: E0209 19:26:10.605856 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:10.642572 env[1221]: time="2024-02-09T19:26:10.642520311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f8bddf656-hw44g,Uid:e8c0c9af-f3eb-4761-8695-14b0098caf97,Namespace:calico-system,Attempt:0,} returns sandbox id \"d5ea2a58f14c619eb174e9d3508abd8a99c3e2963d9f6c910bf2b37df633f8bb\"" Feb 9 19:26:10.644945 env[1221]: time="2024-02-09T19:26:10.644908306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\"" Feb 9 19:26:10.706865 kubelet[2267]: E0209 19:26:10.706826 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.707136 kubelet[2267]: W0209 19:26:10.707070 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.707136 kubelet[2267]: E0209 19:26:10.707107 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:10.707740 kubelet[2267]: E0209 19:26:10.707719 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.707890 kubelet[2267]: W0209 19:26:10.707873 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.708029 kubelet[2267]: E0209 19:26:10.708016 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:10.717474 kubelet[2267]: E0209 19:26:10.717446 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.717696 kubelet[2267]: W0209 19:26:10.717674 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.717823 kubelet[2267]: E0209 19:26:10.717809 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:10.809142 kubelet[2267]: E0209 19:26:10.809027 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.809382 kubelet[2267]: W0209 19:26:10.809360 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.809503 kubelet[2267]: E0209 19:26:10.809488 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:10.824227 env[1221]: time="2024-02-09T19:26:10.824168455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mdqjh,Uid:29d5747b-48e5-42c6-bbcc-5f8820426c55,Namespace:calico-system,Attempt:0,}" Feb 9 19:26:10.852800 env[1221]: time="2024-02-09T19:26:10.852645719Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:26:10.853679 env[1221]: time="2024-02-09T19:26:10.853590748Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:26:10.853954 env[1221]: time="2024-02-09T19:26:10.853916627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:26:10.854540 env[1221]: time="2024-02-09T19:26:10.854446283Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/97f3275c87fda520bd6a02c24e545611ded03db91d57c72bbf6615e5f5cf046d pid=2780 runtime=io.containerd.runc.v2 Feb 9 19:26:10.911242 kubelet[2267]: E0209 19:26:10.911106 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.911242 kubelet[2267]: W0209 19:26:10.911130 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.911242 kubelet[2267]: E0209 19:26:10.911177 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:10.919838 kubelet[2267]: E0209 19:26:10.919811 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:10.920075 kubelet[2267]: W0209 19:26:10.920056 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:10.920173 kubelet[2267]: E0209 19:26:10.920161 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:10.959111 env[1221]: time="2024-02-09T19:26:10.959041407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mdqjh,Uid:29d5747b-48e5-42c6-bbcc-5f8820426c55,Namespace:calico-system,Attempt:0,} returns sandbox id \"97f3275c87fda520bd6a02c24e545611ded03db91d57c72bbf6615e5f5cf046d\"" Feb 9 19:26:11.229000 audit[2843]: NETFILTER_CFG table=filter:107 family=2 entries=14 op=nft_register_rule pid=2843 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:11.239163 kernel: kauditd_printk_skb: 134 callbacks suppressed Feb 9 19:26:11.239345 kernel: audit: type=1325 audit(1707506771.229:281): table=filter:107 family=2 entries=14 op=nft_register_rule pid=2843 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:11.229000 audit[2843]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffc55671860 a2=0 a3=7ffc5567184c items=0 ppid=2426 pid=2843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:11.288394 kernel: audit: type=1300 audit(1707506771.229:281): arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffc55671860 a2=0 a3=7ffc5567184c items=0 ppid=2426 pid=2843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:11.229000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:11.315318 kernel: audit: type=1327 audit(1707506771.229:281): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:11.229000 audit[2843]: NETFILTER_CFG table=nat:108 family=2 entries=20 op=nft_register_rule pid=2843 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:11.334333 kernel: audit: type=1325 audit(1707506771.229:282): table=nat:108 family=2 entries=20 op=nft_register_rule pid=2843 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:11.334489 kernel: audit: type=1300 audit(1707506771.229:282): arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffc55671860 a2=0 a3=7ffc5567184c items=0 ppid=2426 pid=2843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:11.229000 audit[2843]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffc55671860 a2=0 a3=7ffc5567184c items=0 ppid=2426 pid=2843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:11.229000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:11.382491 kernel: audit: type=1327 audit(1707506771.229:282): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:11.717529 kubelet[2267]: E0209 19:26:11.717491 2267 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4qwfs" podUID=4296df02-d23c-4462-abee-22483f63c36c Feb 9 19:26:11.922852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4106244622.mount: Deactivated successfully. Feb 9 19:26:13.717826 kubelet[2267]: E0209 19:26:13.716442 2267 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4qwfs" podUID=4296df02-d23c-4462-abee-22483f63c36c Feb 9 19:26:13.798851 env[1221]: time="2024-02-09T19:26:13.798786514Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:13.804053 env[1221]: time="2024-02-09T19:26:13.803997173Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b33768e0da1f8a5788a6a5d8ac2dcf15292ea9f3717de450f946c0a055b3532c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:13.807257 env[1221]: time="2024-02-09T19:26:13.807216073Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:13.809923 env[1221]: time="2024-02-09T19:26:13.809880972Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:5f2d3b8c354a4eb6de46e786889913916e620c6c256982fb8d0f1a1d36a282bc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:13.811508 env[1221]: time="2024-02-09T19:26:13.811464903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\" returns image reference \"sha256:b33768e0da1f8a5788a6a5d8ac2dcf15292ea9f3717de450f946c0a055b3532c\"" Feb 9 19:26:13.815350 env[1221]: time="2024-02-09T19:26:13.813958094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\"" Feb 9 19:26:13.842318 env[1221]: time="2024-02-09T19:26:13.841152041Z" level=info msg="CreateContainer within sandbox \"d5ea2a58f14c619eb174e9d3508abd8a99c3e2963d9f6c910bf2b37df633f8bb\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 9 19:26:13.889330 env[1221]: time="2024-02-09T19:26:13.887620821Z" level=info msg="CreateContainer within sandbox \"d5ea2a58f14c619eb174e9d3508abd8a99c3e2963d9f6c910bf2b37df633f8bb\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e2739e3aad2e0dbea4b124abd8253d6a445003f513ce18ec0d9c2baed008a779\"" Feb 9 19:26:13.889330 env[1221]: time="2024-02-09T19:26:13.888739312Z" level=info msg="StartContainer for \"e2739e3aad2e0dbea4b124abd8253d6a445003f513ce18ec0d9c2baed008a779\"" Feb 9 19:26:14.023040 env[1221]: time="2024-02-09T19:26:14.022890804Z" level=info msg="StartContainer for \"e2739e3aad2e0dbea4b124abd8253d6a445003f513ce18ec0d9c2baed008a779\" returns successfully" Feb 9 19:26:14.902304 kubelet[2267]: E0209 19:26:14.902247 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:14.902304 kubelet[2267]: W0209 19:26:14.902275 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 
19:26:14.903039 kubelet[2267]: E0209 19:26:14.902318 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:14.903039 kubelet[2267]: E0209 19:26:14.902640 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:14.903039 kubelet[2267]: W0209 19:26:14.902652 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:14.903039 kubelet[2267]: E0209 19:26:14.902714 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:14.903039 kubelet[2267]: E0209 19:26:14.902989 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:14.903039 kubelet[2267]: W0209 19:26:14.903004 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:14.903039 kubelet[2267]: E0209 19:26:14.903023 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:14.903465 kubelet[2267]: E0209 19:26:14.903336 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:14.903465 kubelet[2267]: W0209 19:26:14.903349 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:14.903465 kubelet[2267]: E0209 19:26:14.903368 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:14.903637 kubelet[2267]: E0209 19:26:14.903612 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:14.903637 kubelet[2267]: W0209 19:26:14.903623 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:14.903775 kubelet[2267]: E0209 19:26:14.903640 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:14.903979 kubelet[2267]: E0209 19:26:14.903892 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:14.903979 kubelet[2267]: W0209 19:26:14.903908 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:14.903979 kubelet[2267]: E0209 19:26:14.903931 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:14.904316 kubelet[2267]: E0209 19:26:14.904251 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:14.904316 kubelet[2267]: W0209 19:26:14.904267 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:14.904316 kubelet[2267]: E0209 19:26:14.904285 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:14.904604 kubelet[2267]: E0209 19:26:14.904560 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:14.904604 kubelet[2267]: W0209 19:26:14.904576 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:14.904604 kubelet[2267]: E0209 19:26:14.904594 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:14.904856 kubelet[2267]: E0209 19:26:14.904841 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:14.904856 kubelet[2267]: W0209 19:26:14.904856 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:14.905029 kubelet[2267]: E0209 19:26:14.904873 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:14.905194 kubelet[2267]: E0209 19:26:14.905140 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:14.905194 kubelet[2267]: W0209 19:26:14.905155 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:14.905194 kubelet[2267]: E0209 19:26:14.905174 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:14.905476 kubelet[2267]: E0209 19:26:14.905463 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:14.905476 kubelet[2267]: W0209 19:26:14.905475 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:14.905592 kubelet[2267]: E0209 19:26:14.905494 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:14.905875 kubelet[2267]: E0209 19:26:14.905779 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:14.905875 kubelet[2267]: W0209 19:26:14.905792 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:14.905875 kubelet[2267]: E0209 19:26:14.905813 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:14.950624 kubelet[2267]: E0209 19:26:14.950588 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:14.950624 kubelet[2267]: W0209 19:26:14.950617 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:14.950987 kubelet[2267]: E0209 19:26:14.950646 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:14.951079 kubelet[2267]: E0209 19:26:14.951045 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:14.951079 kubelet[2267]: W0209 19:26:14.951059 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:14.951238 kubelet[2267]: E0209 19:26:14.951087 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:14.951542 kubelet[2267]: E0209 19:26:14.951478 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:14.951542 kubelet[2267]: W0209 19:26:14.951494 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:14.951542 kubelet[2267]: E0209 19:26:14.951520 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:14.951923 kubelet[2267]: E0209 19:26:14.951870 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:14.951923 kubelet[2267]: W0209 19:26:14.951892 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:14.951923 kubelet[2267]: E0209 19:26:14.951917 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:14.952232 kubelet[2267]: E0209 19:26:14.952212 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:14.952232 kubelet[2267]: W0209 19:26:14.952231 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:14.952407 kubelet[2267]: E0209 19:26:14.952356 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:14.952555 kubelet[2267]: E0209 19:26:14.952534 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:14.952555 kubelet[2267]: W0209 19:26:14.952553 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:14.952719 kubelet[2267]: E0209 19:26:14.952676 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:14.953322 kubelet[2267]: E0209 19:26:14.952914 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:14.953322 kubelet[2267]: W0209 19:26:14.952929 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:14.953322 kubelet[2267]: E0209 19:26:14.953218 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:14.953569 kubelet[2267]: E0209 19:26:14.953486 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:14.953569 kubelet[2267]: W0209 19:26:14.953499 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:14.953569 kubelet[2267]: E0209 19:26:14.953522 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:14.955333 kubelet[2267]: E0209 19:26:14.953850 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:14.955333 kubelet[2267]: W0209 19:26:14.953867 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:14.955333 kubelet[2267]: E0209 19:26:14.953976 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:14.955333 kubelet[2267]: E0209 19:26:14.954649 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:14.955333 kubelet[2267]: W0209 19:26:14.954663 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:14.955333 kubelet[2267]: E0209 19:26:14.954789 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:14.955333 kubelet[2267]: E0209 19:26:14.954973 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:14.955333 kubelet[2267]: W0209 19:26:14.954984 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:14.955333 kubelet[2267]: E0209 19:26:14.955089 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:14.955333 kubelet[2267]: E0209 19:26:14.955265 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:14.955925 kubelet[2267]: W0209 19:26:14.955276 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:14.955925 kubelet[2267]: E0209 19:26:14.955319 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:14.955925 kubelet[2267]: E0209 19:26:14.955592 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:14.955925 kubelet[2267]: W0209 19:26:14.955605 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:14.955925 kubelet[2267]: E0209 19:26:14.955630 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:14.956188 kubelet[2267]: E0209 19:26:14.955948 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:14.956188 kubelet[2267]: W0209 19:26:14.955960 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:14.956188 kubelet[2267]: E0209 19:26:14.955982 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:14.957334 kubelet[2267]: E0209 19:26:14.956550 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:14.957334 kubelet[2267]: W0209 19:26:14.956566 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:14.957334 kubelet[2267]: E0209 19:26:14.956688 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:14.957334 kubelet[2267]: E0209 19:26:14.956886 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:14.957334 kubelet[2267]: W0209 19:26:14.956898 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:14.957334 kubelet[2267]: E0209 19:26:14.956915 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:14.957334 kubelet[2267]: E0209 19:26:14.957194 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:14.957334 kubelet[2267]: W0209 19:26:14.957208 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:14.957334 kubelet[2267]: E0209 19:26:14.957227 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:14.957859 kubelet[2267]: E0209 19:26:14.957737 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:14.957859 kubelet[2267]: W0209 19:26:14.957752 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:14.957859 kubelet[2267]: E0209 19:26:14.957771 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:15.720390 kubelet[2267]: E0209 19:26:15.716114 2267 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4qwfs" podUID=4296df02-d23c-4462-abee-22483f63c36c Feb 9 19:26:15.795810 env[1221]: time="2024-02-09T19:26:15.795742727Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:15.803884 env[1221]: time="2024-02-09T19:26:15.803824354Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:15.807656 env[1221]: time="2024-02-09T19:26:15.807587839Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:15.810759 env[1221]: time="2024-02-09T19:26:15.810713587Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b05edbd1f80db4ada229e6001a666a7dd36bb6ab617143684fb3d28abfc4b71e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:15.811969 env[1221]: time="2024-02-09T19:26:15.811918415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\" returns image reference \"sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a\"" Feb 9 19:26:15.816366 env[1221]: time="2024-02-09T19:26:15.816256087Z" level=info msg="CreateContainer within sandbox \"97f3275c87fda520bd6a02c24e545611ded03db91d57c72bbf6615e5f5cf046d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 9 19:26:15.819593 kubelet[2267]: I0209 19:26:15.819561 2267 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 9 19:26:15.840045 env[1221]: time="2024-02-09T19:26:15.839978562Z" level=info msg="CreateContainer within sandbox \"97f3275c87fda520bd6a02c24e545611ded03db91d57c72bbf6615e5f5cf046d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ad6ba9a633ec9c2a1f439cf843f670a7514a6cd5199dfc23a1f79fe128e00d70\"" Feb 9 19:26:15.841223 env[1221]: time="2024-02-09T19:26:15.841188373Z" level=info msg="StartContainer for \"ad6ba9a633ec9c2a1f439cf843f670a7514a6cd5199dfc23a1f79fe128e00d70\"" Feb 9 19:26:15.887994 systemd[1]: run-containerd-runc-k8s.io-ad6ba9a633ec9c2a1f439cf843f670a7514a6cd5199dfc23a1f79fe128e00d70-runc.3d1FGo.mount: Deactivated successfully. Feb 9 19:26:15.914564 kubelet[2267]: E0209 19:26:15.914497 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:15.914564 kubelet[2267]: W0209 19:26:15.914540 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:15.914564 kubelet[2267]: E0209 19:26:15.914579 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:15.915958 kubelet[2267]: E0209 19:26:15.915138 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:15.915958 kubelet[2267]: W0209 19:26:15.915167 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:15.915958 kubelet[2267]: E0209 19:26:15.915208 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:15.915958 kubelet[2267]: E0209 19:26:15.915625 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:15.915958 kubelet[2267]: W0209 19:26:15.915640 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:15.915958 kubelet[2267]: E0209 19:26:15.915660 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:15.916445 kubelet[2267]: E0209 19:26:15.915994 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:15.916445 kubelet[2267]: W0209 19:26:15.916007 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:15.916445 kubelet[2267]: E0209 19:26:15.916027 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:15.916445 kubelet[2267]: E0209 19:26:15.916281 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:15.916445 kubelet[2267]: W0209 19:26:15.916319 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:15.916445 kubelet[2267]: E0209 19:26:15.916339 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:15.916761 kubelet[2267]: E0209 19:26:15.916605 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:15.916761 kubelet[2267]: W0209 19:26:15.916619 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:15.916761 kubelet[2267]: E0209 19:26:15.916639 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:15.916988 kubelet[2267]: E0209 19:26:15.916964 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:15.916988 kubelet[2267]: W0209 19:26:15.916982 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:15.917165 kubelet[2267]: E0209 19:26:15.917000 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:15.917331 kubelet[2267]: E0209 19:26:15.917251 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:15.917331 kubelet[2267]: W0209 19:26:15.917263 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:15.917331 kubelet[2267]: E0209 19:26:15.917281 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:15.917598 kubelet[2267]: E0209 19:26:15.917576 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:15.917598 kubelet[2267]: W0209 19:26:15.917593 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:15.917761 kubelet[2267]: E0209 19:26:15.917611 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:15.917893 kubelet[2267]: E0209 19:26:15.917873 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:15.917893 kubelet[2267]: W0209 19:26:15.917890 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:15.918066 kubelet[2267]: E0209 19:26:15.917908 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:15.918177 kubelet[2267]: E0209 19:26:15.918157 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:15.918177 kubelet[2267]: W0209 19:26:15.918174 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:15.918364 kubelet[2267]: E0209 19:26:15.918191 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:15.918487 kubelet[2267]: E0209 19:26:15.918467 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:15.918487 kubelet[2267]: W0209 19:26:15.918484 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:15.918635 kubelet[2267]: E0209 19:26:15.918502 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:15.959207 env[1221]: time="2024-02-09T19:26:15.955438323Z" level=info msg="StartContainer for \"ad6ba9a633ec9c2a1f439cf843f670a7514a6cd5199dfc23a1f79fe128e00d70\" returns successfully" Feb 9 19:26:15.961459 kubelet[2267]: E0209 19:26:15.961431 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:15.961459 kubelet[2267]: W0209 19:26:15.961455 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:15.961682 kubelet[2267]: E0209 19:26:15.961503 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:15.962497 kubelet[2267]: E0209 19:26:15.962474 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:15.962611 kubelet[2267]: W0209 19:26:15.962508 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:15.962611 kubelet[2267]: E0209 19:26:15.962543 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:15.963210 kubelet[2267]: E0209 19:26:15.963184 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:15.963210 kubelet[2267]: W0209 19:26:15.963210 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:15.963395 kubelet[2267]: E0209 19:26:15.963238 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:15.963837 kubelet[2267]: E0209 19:26:15.963818 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:15.963927 kubelet[2267]: W0209 19:26:15.963844 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:15.963927 kubelet[2267]: E0209 19:26:15.963873 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:15.964795 kubelet[2267]: E0209 19:26:15.964743 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:15.964795 kubelet[2267]: W0209 19:26:15.964771 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:15.965202 kubelet[2267]: E0209 19:26:15.964874 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:15.965953 kubelet[2267]: E0209 19:26:15.965918 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:15.965953 kubelet[2267]: W0209 19:26:15.965951 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:15.966134 kubelet[2267]: E0209 19:26:15.966050 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:15.966686 kubelet[2267]: E0209 19:26:15.966385 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:15.966686 kubelet[2267]: W0209 19:26:15.966534 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:15.966686 kubelet[2267]: E0209 19:26:15.966578 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:15.967361 kubelet[2267]: E0209 19:26:15.967246 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:15.968053 kubelet[2267]: W0209 19:26:15.967266 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:15.968053 kubelet[2267]: E0209 19:26:15.967990 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:15.969408 kubelet[2267]: E0209 19:26:15.969187 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:15.969408 kubelet[2267]: W0209 19:26:15.969204 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:15.969408 kubelet[2267]: E0209 19:26:15.969357 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:15.969932 kubelet[2267]: E0209 19:26:15.969725 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:15.969932 kubelet[2267]: W0209 19:26:15.969752 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:15.969932 kubelet[2267]: E0209 19:26:15.969886 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:15.970138 kubelet[2267]: E0209 19:26:15.970041 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:15.970138 kubelet[2267]: W0209 19:26:15.970054 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:15.970362 kubelet[2267]: E0209 19:26:15.970311 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:15.970362 kubelet[2267]: E0209 19:26:15.970333 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:15.970362 kubelet[2267]: W0209 19:26:15.970344 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:15.970640 kubelet[2267]: E0209 19:26:15.970370 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:15.970737 kubelet[2267]: E0209 19:26:15.970646 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:15.970737 kubelet[2267]: W0209 19:26:15.970659 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:15.970737 kubelet[2267]: E0209 19:26:15.970683 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:15.975176 kubelet[2267]: E0209 19:26:15.974757 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:15.975176 kubelet[2267]: W0209 19:26:15.974776 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:15.975176 kubelet[2267]: E0209 19:26:15.974901 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:15.975176 kubelet[2267]: E0209 19:26:15.975061 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:15.975176 kubelet[2267]: W0209 19:26:15.975084 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:15.975176 kubelet[2267]: E0209 19:26:15.975104 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:15.975595 kubelet[2267]: E0209 19:26:15.975405 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:15.975595 kubelet[2267]: W0209 19:26:15.975417 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:15.975595 kubelet[2267]: E0209 19:26:15.975445 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:15.977339 kubelet[2267]: E0209 19:26:15.976314 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:15.977339 kubelet[2267]: W0209 19:26:15.976330 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:15.977339 kubelet[2267]: E0209 19:26:15.976357 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:26:15.977339 kubelet[2267]: E0209 19:26:15.976649 2267 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:26:15.977339 kubelet[2267]: W0209 19:26:15.976662 2267 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:26:15.977339 kubelet[2267]: E0209 19:26:15.976688 2267 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:26:16.586379 env[1221]: time="2024-02-09T19:26:16.586312381Z" level=info msg="shim disconnected" id=ad6ba9a633ec9c2a1f439cf843f670a7514a6cd5199dfc23a1f79fe128e00d70 Feb 9 19:26:16.586379 env[1221]: time="2024-02-09T19:26:16.586381659Z" level=warning msg="cleaning up after shim disconnected" id=ad6ba9a633ec9c2a1f439cf843f670a7514a6cd5199dfc23a1f79fe128e00d70 namespace=k8s.io Feb 9 19:26:16.586810 env[1221]: time="2024-02-09T19:26:16.586398139Z" level=info msg="cleaning up dead shim" Feb 9 19:26:16.598912 env[1221]: time="2024-02-09T19:26:16.598839852Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:26:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3006 runtime=io.containerd.runc.v2\n" Feb 9 19:26:16.825532 env[1221]: time="2024-02-09T19:26:16.825458084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\"" Feb 9 19:26:16.837427 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad6ba9a633ec9c2a1f439cf843f670a7514a6cd5199dfc23a1f79fe128e00d70-rootfs.mount: Deactivated successfully. Feb 9 19:26:16.848713 kubelet[2267]: I0209 19:26:16.848680 2267 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-7f8bddf656-hw44g" podStartSLOduration=-9.223372030006144e+09 pod.CreationTimestamp="2024-02-09 19:26:10 +0000 UTC" firstStartedPulling="2024-02-09 19:26:10.644280255 +0000 UTC m=+19.601958279" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:26:14.886247636 +0000 UTC m=+23.843925672" watchObservedRunningTime="2024-02-09 19:26:16.848631885 +0000 UTC m=+25.806309922" Feb 9 19:26:17.716098 kubelet[2267]: E0209 19:26:17.716064 2267 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4qwfs" podUID=4296df02-d23c-4462-abee-22483f63c36c Feb 9 19:26:19.715902 kubelet[2267]: E0209 19:26:19.715859 2267 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4qwfs" podUID=4296df02-d23c-4462-abee-22483f63c36c Feb 9 19:26:21.717344 kubelet[2267]: E0209 19:26:21.717311 2267 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4qwfs" podUID=4296df02-d23c-4462-abee-22483f63c36c Feb 9 19:26:22.205380 env[1221]: time="2024-02-09T19:26:22.205323029Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:22.211764 env[1221]: time="2024-02-09T19:26:22.211713378Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:22.215189 env[1221]: time="2024-02-09T19:26:22.215147396Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 9 19:26:22.218104 env[1221]: time="2024-02-09T19:26:22.218044456Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:d943b4c23e82a39b0186a1a3b2fe8f728e543d503df72d7be521501a82b7e7b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:22.219323 env[1221]: time="2024-02-09T19:26:22.219244903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\" returns image reference \"sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93\"" Feb 9 19:26:22.224757 env[1221]: time="2024-02-09T19:26:22.224697271Z" level=info msg="CreateContainer within sandbox \"97f3275c87fda520bd6a02c24e545611ded03db91d57c72bbf6615e5f5cf046d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 9 19:26:22.247208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3420432096.mount: Deactivated successfully. Feb 9 19:26:22.255887 env[1221]: time="2024-02-09T19:26:22.255808637Z" level=info msg="CreateContainer within sandbox \"97f3275c87fda520bd6a02c24e545611ded03db91d57c72bbf6615e5f5cf046d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c9c3e42131bf6efbbdbb18e21d84f87fa8d4a9f7815c7b4cfb292f62f7cefb2a\"" Feb 9 19:26:22.258356 env[1221]: time="2024-02-09T19:26:22.256827191Z" level=info msg="StartContainer for \"c9c3e42131bf6efbbdbb18e21d84f87fa8d4a9f7815c7b4cfb292f62f7cefb2a\"" Feb 9 19:26:22.366572 env[1221]: time="2024-02-09T19:26:22.366515760Z" level=info msg="StartContainer for \"c9c3e42131bf6efbbdbb18e21d84f87fa8d4a9f7815c7b4cfb292f62f7cefb2a\" returns successfully" Feb 9 19:26:23.240819 systemd[1]: run-containerd-runc-k8s.io-c9c3e42131bf6efbbdbb18e21d84f87fa8d4a9f7815c7b4cfb292f62f7cefb2a-runc.NsokPm.mount: Deactivated successfully. Feb 9 19:26:23.252897 env[1221]: time="2024-02-09T19:26:23.252817885Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:26:23.288040 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9c3e42131bf6efbbdbb18e21d84f87fa8d4a9f7815c7b4cfb292f62f7cefb2a-rootfs.mount: Deactivated successfully. 
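The burst of kubelet messages above ("Failed to unmarshal output for command: init ... unexpected end of JSON input", "executable file not found in $PATH") is the FlexVolume probe finding the nodeagent~uds plugin directory before Calico's flexvol-driver container (created earlier in this log from ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0) has put the uds binary in place, so the driver call produces no output and the kubelet's JSON decode fails. For reference, a minimal hypothetical Go stub of the reply a FlexVolume driver is expected to print for the init call, assuming the standard FlexVolume driver-call contract (the type and function names here are illustrative, not Calico's code):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON object a FlexVolume driver prints on stdout;
// an empty stdout is what the kubelet reports as "unexpected end of JSON input".
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(s driverStatus) {
	out, err := json.Marshal(s)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(string(out))
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// "attach": false tells the kubelet this driver has no attach/detach phase.
		reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
		return
	}
	// Calls the driver does not implement are answered with "Not supported".
	reply(driverStatus{Status: "Not supported"})
}

Once a real driver binary sits at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds and answers init with JSON of this shape, the dynamic plugin probe should stop logging these errors.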
Feb 9 19:26:23.302819 kubelet[2267]: I0209 19:26:23.302586 2267 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 19:26:23.337202 kubelet[2267]: I0209 19:26:23.337161 2267 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:26:23.344019 kubelet[2267]: I0209 19:26:23.343990 2267 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:26:23.345903 kubelet[2267]: I0209 19:26:23.345874 2267 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:26:23.424464 kubelet[2267]: I0209 19:26:23.424424 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8qc9\" (UniqueName: \"kubernetes.io/projected/fb8e00fa-f6f2-47c4-a525-9d0f529ec36c-kube-api-access-t8qc9\") pod \"coredns-787d4945fb-kzz8v\" (UID: \"fb8e00fa-f6f2-47c4-a525-9d0f529ec36c\") " pod="kube-system/coredns-787d4945fb-kzz8v" Feb 9 19:26:23.424697 kubelet[2267]: I0209 19:26:23.424494 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks2g7\" (UniqueName: \"kubernetes.io/projected/f2e3bd37-daf7-4099-845b-c1f627963705-kube-api-access-ks2g7\") pod \"coredns-787d4945fb-fcq6k\" (UID: \"f2e3bd37-daf7-4099-845b-c1f627963705\") " pod="kube-system/coredns-787d4945fb-fcq6k" Feb 9 19:26:23.424697 kubelet[2267]: I0209 19:26:23.424534 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb8e00fa-f6f2-47c4-a525-9d0f529ec36c-config-volume\") pod \"coredns-787d4945fb-kzz8v\" (UID: \"fb8e00fa-f6f2-47c4-a525-9d0f529ec36c\") " pod="kube-system/coredns-787d4945fb-kzz8v" Feb 9 19:26:23.424697 kubelet[2267]: I0209 19:26:23.424577 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwmkr\" (UniqueName: \"kubernetes.io/projected/5b12e0e6-a8af-4fcb-82b3-77090ee9a472-kube-api-access-mwmkr\") pod \"calico-kube-controllers-76f544f57c-zfhh4\" (UID: \"5b12e0e6-a8af-4fcb-82b3-77090ee9a472\") " pod="calico-system/calico-kube-controllers-76f544f57c-zfhh4" Feb 9 19:26:23.424697 kubelet[2267]: I0209 19:26:23.424616 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2e3bd37-daf7-4099-845b-c1f627963705-config-volume\") pod \"coredns-787d4945fb-fcq6k\" (UID: \"f2e3bd37-daf7-4099-845b-c1f627963705\") " pod="kube-system/coredns-787d4945fb-fcq6k" Feb 9 19:26:23.424697 kubelet[2267]: I0209 19:26:23.424664 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b12e0e6-a8af-4fcb-82b3-77090ee9a472-tigera-ca-bundle\") pod \"calico-kube-controllers-76f544f57c-zfhh4\" (UID: \"5b12e0e6-a8af-4fcb-82b3-77090ee9a472\") " pod="calico-system/calico-kube-controllers-76f544f57c-zfhh4" Feb 9 19:26:23.645043 env[1221]: time="2024-02-09T19:26:23.644900493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-kzz8v,Uid:fb8e00fa-f6f2-47c4-a525-9d0f529ec36c,Namespace:kube-system,Attempt:0,}" Feb 9 19:26:23.661673 env[1221]: time="2024-02-09T19:26:23.661364385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-fcq6k,Uid:f2e3bd37-daf7-4099-845b-c1f627963705,Namespace:kube-system,Attempt:0,}" Feb 9 19:26:23.673899 env[1221]: time="2024-02-09T19:26:23.673834262Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-76f544f57c-zfhh4,Uid:5b12e0e6-a8af-4fcb-82b3-77090ee9a472,Namespace:calico-system,Attempt:0,}" Feb 9 19:26:23.720451 env[1221]: time="2024-02-09T19:26:23.720381789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4qwfs,Uid:4296df02-d23c-4462-abee-22483f63c36c,Namespace:calico-system,Attempt:0,}" Feb 9 19:26:24.052155 env[1221]: time="2024-02-09T19:26:24.052070031Z" level=info msg="shim disconnected" id=c9c3e42131bf6efbbdbb18e21d84f87fa8d4a9f7815c7b4cfb292f62f7cefb2a Feb 9 19:26:24.052155 env[1221]: time="2024-02-09T19:26:24.052137417Z" level=warning msg="cleaning up after shim disconnected" id=c9c3e42131bf6efbbdbb18e21d84f87fa8d4a9f7815c7b4cfb292f62f7cefb2a namespace=k8s.io Feb 9 19:26:24.052155 env[1221]: time="2024-02-09T19:26:24.052155319Z" level=info msg="cleaning up dead shim" Feb 9 19:26:24.073051 env[1221]: time="2024-02-09T19:26:24.072994950Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:26:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3089 runtime=io.containerd.runc.v2\n" Feb 9 19:26:24.300586 env[1221]: time="2024-02-09T19:26:24.300492945Z" level=error msg="Failed to destroy network for sandbox \"604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:26:24.308252 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6-shm.mount: Deactivated successfully. Feb 9 19:26:24.309516 env[1221]: time="2024-02-09T19:26:24.309451566Z" level=error msg="encountered an error cleaning up failed sandbox \"604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:26:24.309743 env[1221]: time="2024-02-09T19:26:24.309697168Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-kzz8v,Uid:fb8e00fa-f6f2-47c4-a525-9d0f529ec36c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:26:24.311709 kubelet[2267]: E0209 19:26:24.310132 2267 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:26:24.311709 kubelet[2267]: E0209 19:26:24.310212 2267 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-787d4945fb-kzz8v" Feb 9 19:26:24.311709 kubelet[2267]: E0209 19:26:24.310249 2267 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-kzz8v" Feb 9 19:26:24.312335 kubelet[2267]: E0209 19:26:24.310338 2267 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-kzz8v_kube-system(fb8e00fa-f6f2-47c4-a525-9d0f529ec36c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-kzz8v_kube-system(fb8e00fa-f6f2-47c4-a525-9d0f529ec36c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-kzz8v" podUID=fb8e00fa-f6f2-47c4-a525-9d0f529ec36c Feb 9 19:26:24.321401 env[1221]: time="2024-02-09T19:26:24.321342901Z" level=error msg="Failed to destroy network for sandbox \"82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:26:24.325391 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b-shm.mount: Deactivated successfully. 
Feb 9 19:26:24.326939 env[1221]: time="2024-02-09T19:26:24.326887526Z" level=error msg="encountered an error cleaning up failed sandbox \"82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:26:24.327183 env[1221]: time="2024-02-09T19:26:24.327121640Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4qwfs,Uid:4296df02-d23c-4462-abee-22483f63c36c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:26:24.329148 kubelet[2267]: E0209 19:26:24.327590 2267 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:26:24.329148 kubelet[2267]: E0209 19:26:24.327656 2267 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4qwfs" Feb 9 19:26:24.329148 kubelet[2267]: E0209 19:26:24.327693 2267 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4qwfs" Feb 9 19:26:24.329446 kubelet[2267]: E0209 19:26:24.327764 2267 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4qwfs_calico-system(4296df02-d23c-4462-abee-22483f63c36c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4qwfs_calico-system(4296df02-d23c-4462-abee-22483f63c36c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4qwfs" podUID=4296df02-d23c-4462-abee-22483f63c36c Feb 9 19:26:24.331911 env[1221]: time="2024-02-09T19:26:24.331561415Z" level=error msg="Failed to destroy network for sandbox \"df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Feb 9 19:26:24.335668 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178-shm.mount: Deactivated successfully. Feb 9 19:26:24.336730 env[1221]: time="2024-02-09T19:26:24.336669898Z" level=error msg="encountered an error cleaning up failed sandbox \"df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:26:24.336817 env[1221]: time="2024-02-09T19:26:24.336757053Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-fcq6k,Uid:f2e3bd37-daf7-4099-845b-c1f627963705,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:26:24.337014 kubelet[2267]: E0209 19:26:24.336990 2267 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:26:24.337128 kubelet[2267]: E0209 19:26:24.337052 2267 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-fcq6k" Feb 9 19:26:24.337128 kubelet[2267]: E0209 19:26:24.337089 2267 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-fcq6k" Feb 9 19:26:24.337254 kubelet[2267]: E0209 19:26:24.337154 2267 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-fcq6k_kube-system(f2e3bd37-daf7-4099-845b-c1f627963705)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-fcq6k_kube-system(f2e3bd37-daf7-4099-845b-c1f627963705)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-fcq6k" podUID=f2e3bd37-daf7-4099-845b-c1f627963705 Feb 9 19:26:24.348709 env[1221]: time="2024-02-09T19:26:24.348639957Z" level=error msg="Failed to destroy network for sandbox 
\"ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:26:24.352645 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f-shm.mount: Deactivated successfully. Feb 9 19:26:24.353735 env[1221]: time="2024-02-09T19:26:24.353594067Z" level=error msg="encountered an error cleaning up failed sandbox \"ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:26:24.353735 env[1221]: time="2024-02-09T19:26:24.353695970Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76f544f57c-zfhh4,Uid:5b12e0e6-a8af-4fcb-82b3-77090ee9a472,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:26:24.354021 kubelet[2267]: E0209 19:26:24.353980 2267 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:26:24.354123 kubelet[2267]: E0209 19:26:24.354077 2267 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76f544f57c-zfhh4" Feb 9 19:26:24.354200 kubelet[2267]: E0209 19:26:24.354146 2267 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76f544f57c-zfhh4" Feb 9 19:26:24.354257 kubelet[2267]: E0209 19:26:24.354235 2267 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76f544f57c-zfhh4_calico-system(5b12e0e6-a8af-4fcb-82b3-77090ee9a472)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76f544f57c-zfhh4_calico-system(5b12e0e6-a8af-4fcb-82b3-77090ee9a472)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76f544f57c-zfhh4" podUID=5b12e0e6-a8af-4fcb-82b3-77090ee9a472 Feb 9 19:26:24.861872 kubelet[2267]: I0209 19:26:24.861841 2267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" Feb 9 19:26:24.862975 env[1221]: time="2024-02-09T19:26:24.862930627Z" level=info msg="StopPodSandbox for \"604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6\"" Feb 9 19:26:24.865995 kubelet[2267]: I0209 19:26:24.865965 2267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" Feb 9 19:26:24.866755 env[1221]: time="2024-02-09T19:26:24.866701445Z" level=info msg="StopPodSandbox for \"ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f\"" Feb 9 19:26:24.869526 kubelet[2267]: I0209 19:26:24.869490 2267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" Feb 9 19:26:24.870416 env[1221]: time="2024-02-09T19:26:24.870370459Z" level=info msg="StopPodSandbox for \"82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b\"" Feb 9 19:26:24.877098 env[1221]: time="2024-02-09T19:26:24.876662618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\"" Feb 9 19:26:24.880574 kubelet[2267]: I0209 19:26:24.879536 2267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" Feb 9 19:26:24.894321 env[1221]: time="2024-02-09T19:26:24.882268275Z" level=info msg="StopPodSandbox for \"df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178\"" Feb 9 19:26:24.946559 env[1221]: time="2024-02-09T19:26:24.946478917Z" level=error msg="StopPodSandbox for \"604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6\" failed" error="failed to destroy network for sandbox \"604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:26:24.946962 kubelet[2267]: E0209 19:26:24.946908 2267 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" Feb 9 19:26:24.947094 kubelet[2267]: E0209 19:26:24.947034 2267 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6} Feb 9 19:26:24.947171 kubelet[2267]: E0209 19:26:24.947113 2267 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fb8e00fa-f6f2-47c4-a525-9d0f529ec36c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:26:24.947171 kubelet[2267]: E0209 19:26:24.947161 2267 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fb8e00fa-f6f2-47c4-a525-9d0f529ec36c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-kzz8v" podUID=fb8e00fa-f6f2-47c4-a525-9d0f529ec36c Feb 9 19:26:24.981263 env[1221]: time="2024-02-09T19:26:24.981187114Z" level=error msg="StopPodSandbox for \"82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b\" failed" error="failed to destroy network for sandbox \"82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:26:24.981982 kubelet[2267]: E0209 19:26:24.981727 2267 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" Feb 9 19:26:24.981982 kubelet[2267]: E0209 19:26:24.981800 2267 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b} Feb 9 19:26:24.981982 kubelet[2267]: E0209 19:26:24.981878 2267 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4296df02-d23c-4462-abee-22483f63c36c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:26:24.981982 kubelet[2267]: E0209 19:26:24.981927 2267 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4296df02-d23c-4462-abee-22483f63c36c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4qwfs" podUID=4296df02-d23c-4462-abee-22483f63c36c Feb 9 19:26:24.992913 env[1221]: time="2024-02-09T19:26:24.992806777Z" level=error msg="StopPodSandbox for \"df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178\" failed" error="failed to destroy network for sandbox \"df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:26:24.993586 kubelet[2267]: E0209 19:26:24.993305 2267 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" Feb 9 19:26:24.993586 kubelet[2267]: E0209 19:26:24.993386 2267 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178} Feb 9 19:26:24.993586 kubelet[2267]: E0209 19:26:24.993488 2267 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f2e3bd37-daf7-4099-845b-c1f627963705\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:26:24.993586 kubelet[2267]: E0209 19:26:24.993556 2267 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f2e3bd37-daf7-4099-845b-c1f627963705\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-fcq6k" podUID=f2e3bd37-daf7-4099-845b-c1f627963705 Feb 9 19:26:25.004698 env[1221]: time="2024-02-09T19:26:25.004618291Z" level=error msg="StopPodSandbox for \"ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f\" failed" error="failed to destroy network for sandbox \"ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:26:25.004965 kubelet[2267]: E0209 19:26:25.004939 2267 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" Feb 9 19:26:25.005093 kubelet[2267]: E0209 19:26:25.005000 2267 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f} Feb 9 19:26:25.005093 kubelet[2267]: E0209 19:26:25.005055 2267 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5b12e0e6-a8af-4fcb-82b3-77090ee9a472\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:26:25.005249 kubelet[2267]: E0209 19:26:25.005110 2267 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5b12e0e6-a8af-4fcb-82b3-77090ee9a472\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76f544f57c-zfhh4" podUID=5b12e0e6-a8af-4fcb-82b3-77090ee9a472 Feb 9 19:26:31.074876 kubelet[2267]: I0209 19:26:31.074584 2267 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 9 19:26:31.218000 audit[3335]: NETFILTER_CFG table=filter:109 family=2 entries=13 op=nft_register_rule pid=3335 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:31.238322 kernel: audit: type=1325 audit(1707506791.218:283): table=filter:109 family=2 entries=13 op=nft_register_rule pid=3335 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:31.218000 audit[3335]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffcd52153d0 a2=0 a3=7ffcd52153bc items=0 ppid=2426 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:31.275310 kernel: audit: type=1300 audit(1707506791.218:283): arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffcd52153d0 a2=0 a3=7ffcd52153bc items=0 ppid=2426 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:31.218000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:31.309799 kernel: audit: type=1327 audit(1707506791.218:283): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:31.309942 kernel: audit: type=1325 audit(1707506791.218:284): table=nat:110 family=2 entries=27 op=nft_register_chain pid=3335 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:31.218000 audit[3335]: NETFILTER_CFG table=nat:110 family=2 entries=27 op=nft_register_chain pid=3335 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:31.345150 kernel: audit: type=1300 audit(1707506791.218:284): arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffcd52153d0 a2=0 a3=7ffcd52153bc items=0 ppid=2426 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:31.218000 audit[3335]: SYSCALL arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffcd52153d0 a2=0 a3=7ffcd52153bc items=0 ppid=2426 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:31.218000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:31.362346 kernel: audit: type=1327 audit(1707506791.218:284): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:31.774994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount152693158.mount: Deactivated successfully. Feb 9 19:26:31.843333 env[1221]: time="2024-02-09T19:26:31.843140398Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:31.844183 env[1221]: time="2024-02-09T19:26:31.844146254Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:31.850110 env[1221]: time="2024-02-09T19:26:31.850056327Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:31.852121 env[1221]: time="2024-02-09T19:26:31.852080385Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:a45dffb21a0e9ca8962f36359a2ab776beeecd93843543c2fa1745d7bbb0f754,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:31.853040 env[1221]: time="2024-02-09T19:26:31.852989817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\" returns image reference \"sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c\"" Feb 9 19:26:31.873364 env[1221]: time="2024-02-09T19:26:31.865758239Z" level=info msg="CreateContainer within sandbox \"97f3275c87fda520bd6a02c24e545611ded03db91d57c72bbf6615e5f5cf046d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 9 19:26:31.898941 env[1221]: time="2024-02-09T19:26:31.898849067Z" level=info msg="CreateContainer within sandbox \"97f3275c87fda520bd6a02c24e545611ded03db91d57c72bbf6615e5f5cf046d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"70f0f32a7febb78233af606056990407f65e20b485ef0338d6bc7630fdd27813\"" Feb 9 19:26:31.906212 env[1221]: time="2024-02-09T19:26:31.906117775Z" level=info msg="StartContainer for \"70f0f32a7febb78233af606056990407f65e20b485ef0338d6bc7630fdd27813\"" Feb 9 19:26:31.998174 env[1221]: time="2024-02-09T19:26:31.998117658Z" level=info msg="StartContainer for \"70f0f32a7febb78233af606056990407f65e20b485ef0338d6bc7630fdd27813\" returns successfully" Feb 9 19:26:32.124545 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 9 19:26:32.124735 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
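The NETFILTER_CFG audit events above record the command line of the restoring process only as a hex-encoded, NUL-separated PROCTITLE field. As a minimal sketch (plain Python, not part of the log or of any Flatcar tooling), the value copied verbatim from the records above decodes like this:

    # Audit PROCTITLE values are hex-encoded argv strings with NUL separators.
    # The hex below is copied verbatim from the iptables-restore records above.
    proctitle_hex = (
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"
    )
    argv = bytes.fromhex(proctitle_hex).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # prints: iptables-restore -w 5 -W 100000 --noflush --counters

The same decoding applies to the longer bpftool and iptables-nft-restore PROCTITLE values that follow.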
Feb 9 19:26:32.942000 kubelet[2267]: I0209 19:26:32.941955 2267 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-mdqjh" podStartSLOduration=-9.223372013912886e+09 pod.CreationTimestamp="2024-02-09 19:26:10 +0000 UTC" firstStartedPulling="2024-02-09 19:26:10.962608698 +0000 UTC m=+19.920286709" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:26:32.940402141 +0000 UTC m=+41.898080176" watchObservedRunningTime="2024-02-09 19:26:32.941889761 +0000 UTC m=+41.899567797" Feb 9 19:26:33.645000 audit[3477]: AVC avc: denied { write } for pid=3477 comm="tee" name="fd" dev="proc" ino=24427 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:26:33.668321 kernel: audit: type=1400 audit(1707506793.645:285): avc: denied { write } for pid=3477 comm="tee" name="fd" dev="proc" ino=24427 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:26:33.673000 audit[3472]: AVC avc: denied { write } for pid=3472 comm="tee" name="fd" dev="proc" ino=24432 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:26:33.697364 kernel: audit: type=1400 audit(1707506793.673:286): avc: denied { write } for pid=3472 comm="tee" name="fd" dev="proc" ino=24432 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:26:33.697542 kernel: audit: type=1400 audit(1707506793.695:287): avc: denied { write } for pid=3466 comm="tee" name="fd" dev="proc" ino=24435 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:26:33.695000 audit[3466]: AVC avc: denied { write } for pid=3466 comm="tee" name="fd" dev="proc" ino=24435 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:26:33.695000 audit[3466]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffb44b591e a2=241 a3=1b6 items=1 ppid=3430 pid=3466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:33.759326 kernel: audit: type=1300 audit(1707506793.695:287): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffb44b591e a2=241 a3=1b6 items=1 ppid=3430 pid=3466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:33.695000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Feb 9 19:26:33.695000 audit: PATH item=0 name="/dev/fd/63" inode=24412 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:26:33.695000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:26:33.673000 audit[3472]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdd931092f a2=241 a3=1b6 items=1 ppid=3434 pid=3472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:33.673000 audit: CWD 
cwd="/etc/service/enabled/cni/log" Feb 9 19:26:33.673000 audit: PATH item=0 name="/dev/fd/63" inode=24417 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:26:33.673000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:26:33.645000 audit[3477]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffed799692d a2=241 a3=1b6 items=1 ppid=3440 pid=3477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:33.645000 audit: CWD cwd="/etc/service/enabled/bird6/log" Feb 9 19:26:33.645000 audit: PATH item=0 name="/dev/fd/63" inode=24422 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:26:33.645000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:26:33.778000 audit[3502]: AVC avc: denied { write } for pid=3502 comm="tee" name="fd" dev="proc" ino=24447 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:26:33.788000 audit[3508]: AVC avc: denied { write } for pid=3508 comm="tee" name="fd" dev="proc" ino=24953 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:26:33.788000 audit[3508]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff4dc6292d a2=241 a3=1b6 items=1 ppid=3436 pid=3508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:33.788000 audit: CWD cwd="/etc/service/enabled/confd/log" Feb 9 19:26:33.788000 audit: PATH item=0 name="/dev/fd/63" inode=24452 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:26:33.788000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:26:33.778000 audit[3502]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd21c2492d a2=241 a3=1b6 items=1 ppid=3453 pid=3502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:33.778000 audit: CWD cwd="/etc/service/enabled/felix/log" Feb 9 19:26:33.778000 audit: PATH item=0 name="/dev/fd/63" inode=24441 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:26:33.778000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:26:33.802000 audit[3501]: AVC avc: denied { write } for pid=3501 comm="tee" name="fd" dev="proc" ino=24957 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir 
permissive=0 Feb 9 19:26:33.802000 audit[3501]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcc9a0991d a2=241 a3=1b6 items=1 ppid=3431 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:33.802000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Feb 9 19:26:33.802000 audit: PATH item=0 name="/dev/fd/63" inode=24442 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:26:33.802000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:26:33.810000 audit[3506]: AVC avc: denied { write } for pid=3506 comm="tee" name="fd" dev="proc" ino=24961 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:26:33.810000 audit[3506]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff7adad92e a2=241 a3=1b6 items=1 ppid=3444 pid=3506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:33.810000 audit: CWD cwd="/etc/service/enabled/bird/log" Feb 9 19:26:33.810000 audit: PATH item=0 name="/dev/fd/63" inode=24451 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:26:33.810000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:26:34.408000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.408000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.408000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.408000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.408000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.408000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.408000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.408000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.408000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.408000 audit: BPF prog-id=10 op=LOAD Feb 9 19:26:34.408000 audit[3586]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff4010ceb0 a2=70 a3=7f78e4c10000 items=0 ppid=3454 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:34.408000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:26:34.408000 audit: BPF prog-id=10 op=UNLOAD Feb 9 19:26:34.408000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.408000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.408000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.408000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.408000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.408000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.408000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.408000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.408000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.408000 audit: BPF prog-id=11 op=LOAD Feb 9 19:26:34.408000 audit[3586]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff4010ceb0 a2=70 a3=6e items=0 ppid=3454 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:34.408000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 
19:26:34.409000 audit: BPF prog-id=11 op=UNLOAD Feb 9 19:26:34.409000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.409000 audit[3586]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fff4010ce60 a2=70 a3=7fff4010ceb0 items=0 ppid=3454 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:34.409000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:26:34.409000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.409000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.409000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.409000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.409000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.409000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.409000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.409000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.409000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.409000 audit: BPF prog-id=12 op=LOAD Feb 9 19:26:34.409000 audit[3586]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff4010ce40 a2=70 a3=7fff4010ceb0 items=0 ppid=3454 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:34.409000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:26:34.409000 audit: BPF prog-id=12 op=UNLOAD Feb 9 19:26:34.409000 audit[3586]: AVC avc: denied { bpf } for 
pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.409000 audit[3586]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff4010cf20 a2=70 a3=0 items=0 ppid=3454 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:34.409000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:26:34.409000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.409000 audit[3586]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff4010cf10 a2=70 a3=0 items=0 ppid=3454 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:34.409000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:26:34.409000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.409000 audit[3586]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7fff4010cf50 a2=70 a3=0 items=0 ppid=3454 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:34.409000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:26:34.410000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.410000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.410000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.410000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.410000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.410000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.410000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.410000 audit[3586]: AVC avc: denied { perfmon } for pid=3586 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.410000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.410000 audit[3586]: AVC avc: denied { bpf } for pid=3586 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.410000 audit: BPF prog-id=13 op=LOAD Feb 9 19:26:34.410000 audit[3586]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff4010ce70 a2=70 a3=ffffffff items=0 ppid=3454 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:34.410000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:26:34.417000 audit[3591]: AVC avc: denied { bpf } for pid=3591 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.417000 audit[3591]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd297a2b20 a2=70 a3=fff80800 items=0 ppid=3454 pid=3591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:34.417000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 9 19:26:34.417000 audit[3591]: AVC avc: denied { bpf } for pid=3591 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:26:34.417000 audit[3591]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd297a29f0 a2=70 a3=3 items=0 ppid=3454 pid=3591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:34.417000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 9 19:26:34.424000 audit: BPF prog-id=13 op=UNLOAD Feb 9 19:26:34.516000 audit[3614]: NETFILTER_CFG table=raw:111 family=2 entries=19 op=nft_register_chain pid=3614 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:26:34.516000 audit[3614]: SYSCALL arch=c000003e syscall=46 success=yes exit=6132 a0=3 a1=7ffc6f9771e0 a2=0 a3=7ffc6f9771cc items=0 ppid=3454 pid=3614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:34.516000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:26:34.519000 audit[3616]: NETFILTER_CFG table=mangle:112 family=2 entries=19 op=nft_register_chain pid=3616 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:26:34.519000 audit[3616]: SYSCALL arch=c000003e syscall=46 success=yes exit=6800 a0=3 a1=7ffd7b89d9d0 a2=0 a3=55567c916000 items=0 ppid=3454 pid=3616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:34.519000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:26:34.521000 audit[3615]: NETFILTER_CFG table=nat:113 family=2 entries=16 op=nft_register_chain pid=3615 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:26:34.521000 audit[3615]: SYSCALL arch=c000003e syscall=46 success=yes exit=5188 a0=3 a1=7ffe1efa5830 a2=0 a3=556515e70000 items=0 ppid=3454 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:34.521000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:26:34.528000 audit[3621]: NETFILTER_CFG table=filter:114 family=2 entries=39 op=nft_register_chain pid=3621 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:26:34.528000 audit[3621]: SYSCALL arch=c000003e syscall=46 success=yes exit=18472 a0=3 a1=7ffd20f51470 a2=0 a3=7ffd20f5145c items=0 ppid=3454 pid=3621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:34.528000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:26:35.159105 systemd-networkd[1083]: vxlan.calico: Link UP Feb 9 19:26:35.159116 systemd-networkd[1083]: vxlan.calico: Gained carrier Feb 9 19:26:35.717693 env[1221]: time="2024-02-09T19:26:35.717633791Z" level=info msg="StopPodSandbox for \"ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f\"" Feb 9 19:26:35.813030 env[1221]: 2024-02-09 19:26:35.770 [INFO][3647] k8s.go 578: Cleaning up netns ContainerID="ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" Feb 9 19:26:35.813030 env[1221]: 2024-02-09 19:26:35.770 [INFO][3647] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" iface="eth0" netns="/var/run/netns/cni-d66222ce-2e58-05ee-5b74-d6671c0492d4" Feb 9 19:26:35.813030 env[1221]: 2024-02-09 19:26:35.771 [INFO][3647] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" iface="eth0" netns="/var/run/netns/cni-d66222ce-2e58-05ee-5b74-d6671c0492d4" Feb 9 19:26:35.813030 env[1221]: 2024-02-09 19:26:35.771 [INFO][3647] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" iface="eth0" netns="/var/run/netns/cni-d66222ce-2e58-05ee-5b74-d6671c0492d4" Feb 9 19:26:35.813030 env[1221]: 2024-02-09 19:26:35.771 [INFO][3647] k8s.go 585: Releasing IP address(es) ContainerID="ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" Feb 9 19:26:35.813030 env[1221]: 2024-02-09 19:26:35.771 [INFO][3647] utils.go 188: Calico CNI releasing IP address ContainerID="ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" Feb 9 19:26:35.813030 env[1221]: 2024-02-09 19:26:35.796 [INFO][3653] ipam_plugin.go 415: Releasing address using handleID ContainerID="ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" HandleID="k8s-pod-network.ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--kube--controllers--76f544f57c--zfhh4-eth0" Feb 9 19:26:35.813030 env[1221]: 2024-02-09 19:26:35.797 [INFO][3653] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:26:35.813030 env[1221]: 2024-02-09 19:26:35.797 [INFO][3653] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:26:35.813030 env[1221]: 2024-02-09 19:26:35.807 [WARNING][3653] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" HandleID="k8s-pod-network.ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--kube--controllers--76f544f57c--zfhh4-eth0" Feb 9 19:26:35.813030 env[1221]: 2024-02-09 19:26:35.807 [INFO][3653] ipam_plugin.go 443: Releasing address using workloadID ContainerID="ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" HandleID="k8s-pod-network.ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--kube--controllers--76f544f57c--zfhh4-eth0" Feb 9 19:26:35.813030 env[1221]: 2024-02-09 19:26:35.809 [INFO][3653] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:26:35.813030 env[1221]: 2024-02-09 19:26:35.811 [INFO][3647] k8s.go 591: Teardown processing complete. ContainerID="ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" Feb 9 19:26:35.816182 env[1221]: time="2024-02-09T19:26:35.813226716Z" level=info msg="TearDown network for sandbox \"ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f\" successfully" Feb 9 19:26:35.816182 env[1221]: time="2024-02-09T19:26:35.813271765Z" level=info msg="StopPodSandbox for \"ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f\" returns successfully" Feb 9 19:26:35.816948 systemd[1]: run-netns-cni\x2dd66222ce\x2d2e58\x2d05ee\x2d5b74\x2dd6671c0492d4.mount: Deactivated successfully. 
Feb 9 19:26:35.820015 env[1221]: time="2024-02-09T19:26:35.819962652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76f544f57c-zfhh4,Uid:5b12e0e6-a8af-4fcb-82b3-77090ee9a472,Namespace:calico-system,Attempt:1,}" Feb 9 19:26:35.997017 systemd-networkd[1083]: cali34377f70188: Link UP Feb 9 19:26:36.013756 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:26:36.013890 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali34377f70188: link becomes ready Feb 9 19:26:36.015540 systemd-networkd[1083]: cali34377f70188: Gained carrier Feb 9 19:26:36.030360 env[1221]: 2024-02-09 19:26:35.905 [INFO][3660] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--kube--controllers--76f544f57c--zfhh4-eth0 calico-kube-controllers-76f544f57c- calico-system 5b12e0e6-a8af-4fcb-82b3-77090ee9a472 682 0 2024-02-09 19:26:10 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:76f544f57c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal calico-kube-controllers-76f544f57c-zfhh4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali34377f70188 [] []}} ContainerID="69555bffaf2ab822f626209636506a67c77d73afe9016ac8b21e132bc987184c" Namespace="calico-system" Pod="calico-kube-controllers-76f544f57c-zfhh4" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--kube--controllers--76f544f57c--zfhh4-" Feb 9 19:26:36.030360 env[1221]: 2024-02-09 19:26:35.905 [INFO][3660] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="69555bffaf2ab822f626209636506a67c77d73afe9016ac8b21e132bc987184c" Namespace="calico-system" Pod="calico-kube-controllers-76f544f57c-zfhh4" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--kube--controllers--76f544f57c--zfhh4-eth0" Feb 9 19:26:36.030360 env[1221]: 2024-02-09 19:26:35.947 [INFO][3672] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="69555bffaf2ab822f626209636506a67c77d73afe9016ac8b21e132bc987184c" HandleID="k8s-pod-network.69555bffaf2ab822f626209636506a67c77d73afe9016ac8b21e132bc987184c" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--kube--controllers--76f544f57c--zfhh4-eth0" Feb 9 19:26:36.030360 env[1221]: 2024-02-09 19:26:35.958 [INFO][3672] ipam_plugin.go 268: Auto assigning IP ContainerID="69555bffaf2ab822f626209636506a67c77d73afe9016ac8b21e132bc987184c" HandleID="k8s-pod-network.69555bffaf2ab822f626209636506a67c77d73afe9016ac8b21e132bc987184c" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--kube--controllers--76f544f57c--zfhh4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000513c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", "pod":"calico-kube-controllers-76f544f57c-zfhh4", "timestamp":"2024-02-09 19:26:35.947416791 +0000 UTC"}, Hostname:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:26:36.030360 env[1221]: 2024-02-09 19:26:35.958 [INFO][3672] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:26:36.030360 env[1221]: 2024-02-09 19:26:35.958 [INFO][3672] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:26:36.030360 env[1221]: 2024-02-09 19:26:35.958 [INFO][3672] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal' Feb 9 19:26:36.030360 env[1221]: 2024-02-09 19:26:35.960 [INFO][3672] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.69555bffaf2ab822f626209636506a67c77d73afe9016ac8b21e132bc987184c" host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:36.030360 env[1221]: 2024-02-09 19:26:35.964 [INFO][3672] ipam.go 372: Looking up existing affinities for host host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:36.030360 env[1221]: 2024-02-09 19:26:35.969 [INFO][3672] ipam.go 489: Trying affinity for 192.168.113.128/26 host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:36.030360 env[1221]: 2024-02-09 19:26:35.972 [INFO][3672] ipam.go 155: Attempting to load block cidr=192.168.113.128/26 host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:36.030360 env[1221]: 2024-02-09 19:26:35.975 [INFO][3672] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.113.128/26 host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:36.030360 env[1221]: 2024-02-09 19:26:35.975 [INFO][3672] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.113.128/26 handle="k8s-pod-network.69555bffaf2ab822f626209636506a67c77d73afe9016ac8b21e132bc987184c" host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:36.030360 env[1221]: 2024-02-09 19:26:35.977 [INFO][3672] ipam.go 1682: Creating new handle: k8s-pod-network.69555bffaf2ab822f626209636506a67c77d73afe9016ac8b21e132bc987184c Feb 9 19:26:36.030360 env[1221]: 2024-02-09 19:26:35.982 [INFO][3672] ipam.go 1203: Writing block in order to claim IPs block=192.168.113.128/26 handle="k8s-pod-network.69555bffaf2ab822f626209636506a67c77d73afe9016ac8b21e132bc987184c" host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:36.030360 env[1221]: 2024-02-09 19:26:35.988 [INFO][3672] ipam.go 1216: Successfully claimed IPs: [192.168.113.129/26] block=192.168.113.128/26 handle="k8s-pod-network.69555bffaf2ab822f626209636506a67c77d73afe9016ac8b21e132bc987184c" host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:36.030360 env[1221]: 2024-02-09 19:26:35.988 [INFO][3672] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.113.129/26] handle="k8s-pod-network.69555bffaf2ab822f626209636506a67c77d73afe9016ac8b21e132bc987184c" host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:36.030360 env[1221]: 2024-02-09 19:26:35.988 [INFO][3672] ipam_plugin.go 377: Released host-wide IPAM lock. 
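The IPAM trace above confirms affinity for block 192.168.113.128/26 (Calico's default block size) and claims 192.168.113.129 from it; a quick check of that arithmetic with Python's ipaddress module:

    import ipaddress

    # The claimed pod address must fall inside the node's affine /26 block,
    # which spans 64 addresses.
    block = ipaddress.ip_network("192.168.113.128/26")
    pod_ip = ipaddress.ip_address("192.168.113.129")
    print(pod_ip in block, block.num_addresses)  # prints: True 64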
Feb 9 19:26:36.030360 env[1221]: 2024-02-09 19:26:35.988 [INFO][3672] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.113.129/26] IPv6=[] ContainerID="69555bffaf2ab822f626209636506a67c77d73afe9016ac8b21e132bc987184c" HandleID="k8s-pod-network.69555bffaf2ab822f626209636506a67c77d73afe9016ac8b21e132bc987184c" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--kube--controllers--76f544f57c--zfhh4-eth0" Feb 9 19:26:36.031602 env[1221]: 2024-02-09 19:26:35.992 [INFO][3660] k8s.go 385: Populated endpoint ContainerID="69555bffaf2ab822f626209636506a67c77d73afe9016ac8b21e132bc987184c" Namespace="calico-system" Pod="calico-kube-controllers-76f544f57c-zfhh4" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--kube--controllers--76f544f57c--zfhh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--kube--controllers--76f544f57c--zfhh4-eth0", GenerateName:"calico-kube-controllers-76f544f57c-", Namespace:"calico-system", SelfLink:"", UID:"5b12e0e6-a8af-4fcb-82b3-77090ee9a472", ResourceVersion:"682", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 26, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76f544f57c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-76f544f57c-zfhh4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.113.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali34377f70188", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:26:36.031602 env[1221]: 2024-02-09 19:26:35.992 [INFO][3660] k8s.go 386: Calico CNI using IPs: [192.168.113.129/32] ContainerID="69555bffaf2ab822f626209636506a67c77d73afe9016ac8b21e132bc987184c" Namespace="calico-system" Pod="calico-kube-controllers-76f544f57c-zfhh4" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--kube--controllers--76f544f57c--zfhh4-eth0" Feb 9 19:26:36.031602 env[1221]: 2024-02-09 19:26:35.992 [INFO][3660] dataplane_linux.go 68: Setting the host side veth name to cali34377f70188 ContainerID="69555bffaf2ab822f626209636506a67c77d73afe9016ac8b21e132bc987184c" Namespace="calico-system" Pod="calico-kube-controllers-76f544f57c-zfhh4" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--kube--controllers--76f544f57c--zfhh4-eth0" Feb 9 19:26:36.031602 env[1221]: 2024-02-09 19:26:36.014 [INFO][3660] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="69555bffaf2ab822f626209636506a67c77d73afe9016ac8b21e132bc987184c" Namespace="calico-system" Pod="calico-kube-controllers-76f544f57c-zfhh4" 
WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--kube--controllers--76f544f57c--zfhh4-eth0" Feb 9 19:26:36.031602 env[1221]: 2024-02-09 19:26:36.015 [INFO][3660] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="69555bffaf2ab822f626209636506a67c77d73afe9016ac8b21e132bc987184c" Namespace="calico-system" Pod="calico-kube-controllers-76f544f57c-zfhh4" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--kube--controllers--76f544f57c--zfhh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--kube--controllers--76f544f57c--zfhh4-eth0", GenerateName:"calico-kube-controllers-76f544f57c-", Namespace:"calico-system", SelfLink:"", UID:"5b12e0e6-a8af-4fcb-82b3-77090ee9a472", ResourceVersion:"682", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 26, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76f544f57c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", ContainerID:"69555bffaf2ab822f626209636506a67c77d73afe9016ac8b21e132bc987184c", Pod:"calico-kube-controllers-76f544f57c-zfhh4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.113.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali34377f70188", MAC:"96:5a:61:4e:31:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:26:36.031602 env[1221]: 2024-02-09 19:26:36.025 [INFO][3660] k8s.go 491: Wrote updated endpoint to datastore ContainerID="69555bffaf2ab822f626209636506a67c77d73afe9016ac8b21e132bc987184c" Namespace="calico-system" Pod="calico-kube-controllers-76f544f57c-zfhh4" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--kube--controllers--76f544f57c--zfhh4-eth0" Feb 9 19:26:36.065893 env[1221]: time="2024-02-09T19:26:36.064879096Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:26:36.065893 env[1221]: time="2024-02-09T19:26:36.064932201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:26:36.065893 env[1221]: time="2024-02-09T19:26:36.064951559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:26:36.065893 env[1221]: time="2024-02-09T19:26:36.065151063Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/69555bffaf2ab822f626209636506a67c77d73afe9016ac8b21e132bc987184c pid=3699 runtime=io.containerd.runc.v2 Feb 9 19:26:36.089000 audit[3716]: NETFILTER_CFG table=filter:115 family=2 entries=36 op=nft_register_chain pid=3716 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:26:36.089000 audit[3716]: SYSCALL arch=c000003e syscall=46 success=yes exit=19908 a0=3 a1=7ffe775b8070 a2=0 a3=7ffe775b805c items=0 ppid=3454 pid=3716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:36.089000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:26:36.168241 env[1221]: time="2024-02-09T19:26:36.168184469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76f544f57c-zfhh4,Uid:5b12e0e6-a8af-4fcb-82b3-77090ee9a472,Namespace:calico-system,Attempt:1,} returns sandbox id \"69555bffaf2ab822f626209636506a67c77d73afe9016ac8b21e132bc987184c\"" Feb 9 19:26:36.172332 env[1221]: time="2024-02-09T19:26:36.170646827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\"" Feb 9 19:26:36.519635 systemd-networkd[1083]: vxlan.calico: Gained IPv6LL Feb 9 19:26:37.147122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount962283176.mount: Deactivated successfully. Feb 9 19:26:37.671996 systemd-networkd[1083]: cali34377f70188: Gained IPv6LL Feb 9 19:26:38.721837 env[1221]: time="2024-02-09T19:26:38.720266658Z" level=info msg="StopPodSandbox for \"82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b\"" Feb 9 19:26:38.735970 env[1221]: time="2024-02-09T19:26:38.735921246Z" level=info msg="StopPodSandbox for \"604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6\"" Feb 9 19:26:38.736877 env[1221]: time="2024-02-09T19:26:38.736831464Z" level=info msg="StopPodSandbox for \"df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178\"" Feb 9 19:26:39.004502 env[1221]: 2024-02-09 19:26:38.851 [INFO][3771] k8s.go 578: Cleaning up netns ContainerID="82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" Feb 9 19:26:39.004502 env[1221]: 2024-02-09 19:26:38.851 [INFO][3771] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" iface="eth0" netns="/var/run/netns/cni-fd06d1fc-77de-9121-b968-e960c5135abb" Feb 9 19:26:39.004502 env[1221]: 2024-02-09 19:26:38.851 [INFO][3771] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" iface="eth0" netns="/var/run/netns/cni-fd06d1fc-77de-9121-b968-e960c5135abb" Feb 9 19:26:39.004502 env[1221]: 2024-02-09 19:26:38.852 [INFO][3771] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" iface="eth0" netns="/var/run/netns/cni-fd06d1fc-77de-9121-b968-e960c5135abb" Feb 9 19:26:39.004502 env[1221]: 2024-02-09 19:26:38.852 [INFO][3771] k8s.go 585: Releasing IP address(es) ContainerID="82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" Feb 9 19:26:39.004502 env[1221]: 2024-02-09 19:26:38.852 [INFO][3771] utils.go 188: Calico CNI releasing IP address ContainerID="82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" Feb 9 19:26:39.004502 env[1221]: 2024-02-09 19:26:38.978 [INFO][3794] ipam_plugin.go 415: Releasing address using handleID ContainerID="82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" HandleID="k8s-pod-network.82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-csi--node--driver--4qwfs-eth0" Feb 9 19:26:39.004502 env[1221]: 2024-02-09 19:26:38.983 [INFO][3794] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:26:39.004502 env[1221]: 2024-02-09 19:26:38.983 [INFO][3794] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:26:39.004502 env[1221]: 2024-02-09 19:26:38.995 [WARNING][3794] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" HandleID="k8s-pod-network.82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-csi--node--driver--4qwfs-eth0" Feb 9 19:26:39.004502 env[1221]: 2024-02-09 19:26:38.999 [INFO][3794] ipam_plugin.go 443: Releasing address using workloadID ContainerID="82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" HandleID="k8s-pod-network.82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-csi--node--driver--4qwfs-eth0" Feb 9 19:26:39.004502 env[1221]: 2024-02-09 19:26:39.000 [INFO][3794] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:26:39.004502 env[1221]: 2024-02-09 19:26:39.002 [INFO][3771] k8s.go 591: Teardown processing complete. ContainerID="82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" Feb 9 19:26:39.010114 systemd[1]: run-netns-cni\x2dfd06d1fc\x2d77de\x2d9121\x2db968\x2de960c5135abb.mount: Deactivated successfully. Feb 9 19:26:39.011060 env[1221]: time="2024-02-09T19:26:39.011011788Z" level=info msg="TearDown network for sandbox \"82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b\" successfully" Feb 9 19:26:39.011187 env[1221]: time="2024-02-09T19:26:39.011162312Z" level=info msg="StopPodSandbox for \"82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b\" returns successfully" Feb 9 19:26:39.012185 env[1221]: time="2024-02-09T19:26:39.012152418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4qwfs,Uid:4296df02-d23c-4462-abee-22483f63c36c,Namespace:calico-system,Attempt:1,}" Feb 9 19:26:39.080940 env[1221]: 2024-02-09 19:26:38.899 [INFO][3785] k8s.go 578: Cleaning up netns ContainerID="df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" Feb 9 19:26:39.080940 env[1221]: 2024-02-09 19:26:38.900 [INFO][3785] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" iface="eth0" netns="/var/run/netns/cni-038b26be-c9e8-6832-aadc-d0df1e31c6aa" Feb 9 19:26:39.080940 env[1221]: 2024-02-09 19:26:38.901 [INFO][3785] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" iface="eth0" netns="/var/run/netns/cni-038b26be-c9e8-6832-aadc-d0df1e31c6aa" Feb 9 19:26:39.080940 env[1221]: 2024-02-09 19:26:38.901 [INFO][3785] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" iface="eth0" netns="/var/run/netns/cni-038b26be-c9e8-6832-aadc-d0df1e31c6aa" Feb 9 19:26:39.080940 env[1221]: 2024-02-09 19:26:38.901 [INFO][3785] k8s.go 585: Releasing IP address(es) ContainerID="df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" Feb 9 19:26:39.080940 env[1221]: 2024-02-09 19:26:38.901 [INFO][3785] utils.go 188: Calico CNI releasing IP address ContainerID="df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" Feb 9 19:26:39.080940 env[1221]: 2024-02-09 19:26:39.061 [INFO][3802] ipam_plugin.go 415: Releasing address using handleID ContainerID="df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" HandleID="k8s-pod-network.df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--fcq6k-eth0" Feb 9 19:26:39.080940 env[1221]: 2024-02-09 19:26:39.063 [INFO][3802] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:26:39.080940 env[1221]: 2024-02-09 19:26:39.063 [INFO][3802] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:26:39.080940 env[1221]: 2024-02-09 19:26:39.074 [WARNING][3802] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" HandleID="k8s-pod-network.df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--fcq6k-eth0" Feb 9 19:26:39.080940 env[1221]: 2024-02-09 19:26:39.074 [INFO][3802] ipam_plugin.go 443: Releasing address using workloadID ContainerID="df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" HandleID="k8s-pod-network.df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--fcq6k-eth0" Feb 9 19:26:39.080940 env[1221]: 2024-02-09 19:26:39.076 [INFO][3802] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:26:39.080940 env[1221]: 2024-02-09 19:26:39.079 [INFO][3785] k8s.go 591: Teardown processing complete. ContainerID="df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" Feb 9 19:26:39.087282 systemd[1]: run-netns-cni\x2d038b26be\x2dc9e8\x2d6832\x2daadc\x2dd0df1e31c6aa.mount: Deactivated successfully. 
Feb 9 19:26:39.088015 env[1221]: time="2024-02-09T19:26:39.087964592Z" level=info msg="TearDown network for sandbox \"df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178\" successfully" Feb 9 19:26:39.088187 env[1221]: time="2024-02-09T19:26:39.088160829Z" level=info msg="StopPodSandbox for \"df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178\" returns successfully" Feb 9 19:26:39.089337 env[1221]: time="2024-02-09T19:26:39.089272921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-fcq6k,Uid:f2e3bd37-daf7-4099-845b-c1f627963705,Namespace:kube-system,Attempt:1,}" Feb 9 19:26:39.147871 env[1221]: 2024-02-09 19:26:38.944 [INFO][3783] k8s.go 578: Cleaning up netns ContainerID="604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" Feb 9 19:26:39.147871 env[1221]: 2024-02-09 19:26:38.945 [INFO][3783] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" iface="eth0" netns="/var/run/netns/cni-32ad5f59-50de-876a-9658-444b69bdb928" Feb 9 19:26:39.147871 env[1221]: 2024-02-09 19:26:38.945 [INFO][3783] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" iface="eth0" netns="/var/run/netns/cni-32ad5f59-50de-876a-9658-444b69bdb928" Feb 9 19:26:39.147871 env[1221]: 2024-02-09 19:26:38.945 [INFO][3783] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" iface="eth0" netns="/var/run/netns/cni-32ad5f59-50de-876a-9658-444b69bdb928" Feb 9 19:26:39.147871 env[1221]: 2024-02-09 19:26:38.945 [INFO][3783] k8s.go 585: Releasing IP address(es) ContainerID="604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" Feb 9 19:26:39.147871 env[1221]: 2024-02-09 19:26:38.945 [INFO][3783] utils.go 188: Calico CNI releasing IP address ContainerID="604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" Feb 9 19:26:39.147871 env[1221]: 2024-02-09 19:26:39.128 [INFO][3807] ipam_plugin.go 415: Releasing address using handleID ContainerID="604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" HandleID="k8s-pod-network.604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--kzz8v-eth0" Feb 9 19:26:39.147871 env[1221]: 2024-02-09 19:26:39.128 [INFO][3807] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:26:39.147871 env[1221]: 2024-02-09 19:26:39.129 [INFO][3807] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:26:39.147871 env[1221]: 2024-02-09 19:26:39.139 [WARNING][3807] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" HandleID="k8s-pod-network.604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--kzz8v-eth0" Feb 9 19:26:39.147871 env[1221]: 2024-02-09 19:26:39.139 [INFO][3807] ipam_plugin.go 443: Releasing address using workloadID ContainerID="604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" HandleID="k8s-pod-network.604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--kzz8v-eth0" Feb 9 19:26:39.147871 env[1221]: 2024-02-09 19:26:39.141 [INFO][3807] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:26:39.147871 env[1221]: 2024-02-09 19:26:39.145 [INFO][3783] k8s.go 591: Teardown processing complete. ContainerID="604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" Feb 9 19:26:39.148982 env[1221]: time="2024-02-09T19:26:39.148918667Z" level=info msg="TearDown network for sandbox \"604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6\" successfully" Feb 9 19:26:39.149161 env[1221]: time="2024-02-09T19:26:39.149130251Z" level=info msg="StopPodSandbox for \"604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6\" returns successfully" Feb 9 19:26:39.150389 env[1221]: time="2024-02-09T19:26:39.150351722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-kzz8v,Uid:fb8e00fa-f6f2-47c4-a525-9d0f529ec36c,Namespace:kube-system,Attempt:1,}" Feb 9 19:26:39.414381 systemd-networkd[1083]: cali3b1764cfcb4: Link UP Feb 9 19:26:39.440476 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:26:39.440592 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali3b1764cfcb4: link becomes ready Feb 9 19:26:39.450109 systemd-networkd[1083]: cali3b1764cfcb4: Gained carrier Feb 9 19:26:39.486730 env[1221]: 2024-02-09 19:26:39.181 [INFO][3813] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-csi--node--driver--4qwfs-eth0 csi-node-driver- calico-system 4296df02-d23c-4462-abee-22483f63c36c 695 0 2024-02-09 19:26:10 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7c77f88967 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal csi-node-driver-4qwfs eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali3b1764cfcb4 [] []}} ContainerID="e79b4b795f3e12bca8f081091d37f2d513096278c4a9589f03e50567e8d14520" Namespace="calico-system" Pod="csi-node-driver-4qwfs" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-csi--node--driver--4qwfs-" Feb 9 19:26:39.486730 env[1221]: 2024-02-09 19:26:39.181 [INFO][3813] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="e79b4b795f3e12bca8f081091d37f2d513096278c4a9589f03e50567e8d14520" Namespace="calico-system" Pod="csi-node-driver-4qwfs" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-csi--node--driver--4qwfs-eth0" Feb 9 19:26:39.486730 env[1221]: 2024-02-09 19:26:39.310 [INFO][3844] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="e79b4b795f3e12bca8f081091d37f2d513096278c4a9589f03e50567e8d14520" HandleID="k8s-pod-network.e79b4b795f3e12bca8f081091d37f2d513096278c4a9589f03e50567e8d14520" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-csi--node--driver--4qwfs-eth0" Feb 9 19:26:39.486730 env[1221]: 2024-02-09 19:26:39.330 [INFO][3844] ipam_plugin.go 268: Auto assigning IP ContainerID="e79b4b795f3e12bca8f081091d37f2d513096278c4a9589f03e50567e8d14520" HandleID="k8s-pod-network.e79b4b795f3e12bca8f081091d37f2d513096278c4a9589f03e50567e8d14520" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-csi--node--driver--4qwfs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002be8a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", "pod":"csi-node-driver-4qwfs", "timestamp":"2024-02-09 19:26:39.310842772 +0000 UTC"}, Hostname:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:26:39.486730 env[1221]: 2024-02-09 19:26:39.331 [INFO][3844] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:26:39.486730 env[1221]: 2024-02-09 19:26:39.331 [INFO][3844] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:26:39.486730 env[1221]: 2024-02-09 19:26:39.331 [INFO][3844] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal' Feb 9 19:26:39.486730 env[1221]: 2024-02-09 19:26:39.333 [INFO][3844] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e79b4b795f3e12bca8f081091d37f2d513096278c4a9589f03e50567e8d14520" host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:39.486730 env[1221]: 2024-02-09 19:26:39.338 [INFO][3844] ipam.go 372: Looking up existing affinities for host host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:39.486730 env[1221]: 2024-02-09 19:26:39.343 [INFO][3844] ipam.go 489: Trying affinity for 192.168.113.128/26 host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:39.486730 env[1221]: 2024-02-09 19:26:39.345 [INFO][3844] ipam.go 155: Attempting to load block cidr=192.168.113.128/26 host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:39.486730 env[1221]: 2024-02-09 19:26:39.348 [INFO][3844] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.113.128/26 host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:39.486730 env[1221]: 2024-02-09 19:26:39.348 [INFO][3844] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.113.128/26 handle="k8s-pod-network.e79b4b795f3e12bca8f081091d37f2d513096278c4a9589f03e50567e8d14520" host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:39.486730 env[1221]: 2024-02-09 19:26:39.351 [INFO][3844] ipam.go 1682: Creating new handle: k8s-pod-network.e79b4b795f3e12bca8f081091d37f2d513096278c4a9589f03e50567e8d14520 Feb 9 19:26:39.486730 env[1221]: 2024-02-09 19:26:39.362 [INFO][3844] ipam.go 1203: Writing block in order to claim IPs block=192.168.113.128/26 handle="k8s-pod-network.e79b4b795f3e12bca8f081091d37f2d513096278c4a9589f03e50567e8d14520" host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 
19:26:39.486730 env[1221]: 2024-02-09 19:26:39.386 [INFO][3844] ipam.go 1216: Successfully claimed IPs: [192.168.113.130/26] block=192.168.113.128/26 handle="k8s-pod-network.e79b4b795f3e12bca8f081091d37f2d513096278c4a9589f03e50567e8d14520" host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:39.486730 env[1221]: 2024-02-09 19:26:39.386 [INFO][3844] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.113.130/26] handle="k8s-pod-network.e79b4b795f3e12bca8f081091d37f2d513096278c4a9589f03e50567e8d14520" host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:39.486730 env[1221]: 2024-02-09 19:26:39.386 [INFO][3844] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:26:39.486730 env[1221]: 2024-02-09 19:26:39.386 [INFO][3844] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.113.130/26] IPv6=[] ContainerID="e79b4b795f3e12bca8f081091d37f2d513096278c4a9589f03e50567e8d14520" HandleID="k8s-pod-network.e79b4b795f3e12bca8f081091d37f2d513096278c4a9589f03e50567e8d14520" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-csi--node--driver--4qwfs-eth0" Feb 9 19:26:39.487964 env[1221]: 2024-02-09 19:26:39.391 [INFO][3813] k8s.go 385: Populated endpoint ContainerID="e79b4b795f3e12bca8f081091d37f2d513096278c4a9589f03e50567e8d14520" Namespace="calico-system" Pod="csi-node-driver-4qwfs" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-csi--node--driver--4qwfs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-csi--node--driver--4qwfs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4296df02-d23c-4462-abee-22483f63c36c", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 26, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", ContainerID:"", Pod:"csi-node-driver-4qwfs", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.113.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali3b1764cfcb4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:26:39.487964 env[1221]: 2024-02-09 19:26:39.392 [INFO][3813] k8s.go 386: Calico CNI using IPs: [192.168.113.130/32] ContainerID="e79b4b795f3e12bca8f081091d37f2d513096278c4a9589f03e50567e8d14520" Namespace="calico-system" Pod="csi-node-driver-4qwfs" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-csi--node--driver--4qwfs-eth0" Feb 9 19:26:39.487964 env[1221]: 2024-02-09 19:26:39.392 [INFO][3813] dataplane_linux.go 68: Setting the host side veth name to 
cali3b1764cfcb4 ContainerID="e79b4b795f3e12bca8f081091d37f2d513096278c4a9589f03e50567e8d14520" Namespace="calico-system" Pod="csi-node-driver-4qwfs" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-csi--node--driver--4qwfs-eth0" Feb 9 19:26:39.487964 env[1221]: 2024-02-09 19:26:39.449 [INFO][3813] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e79b4b795f3e12bca8f081091d37f2d513096278c4a9589f03e50567e8d14520" Namespace="calico-system" Pod="csi-node-driver-4qwfs" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-csi--node--driver--4qwfs-eth0" Feb 9 19:26:39.487964 env[1221]: 2024-02-09 19:26:39.456 [INFO][3813] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="e79b4b795f3e12bca8f081091d37f2d513096278c4a9589f03e50567e8d14520" Namespace="calico-system" Pod="csi-node-driver-4qwfs" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-csi--node--driver--4qwfs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-csi--node--driver--4qwfs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4296df02-d23c-4462-abee-22483f63c36c", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 26, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", ContainerID:"e79b4b795f3e12bca8f081091d37f2d513096278c4a9589f03e50567e8d14520", Pod:"csi-node-driver-4qwfs", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.113.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali3b1764cfcb4", MAC:"9e:b8:b3:b3:a0:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:26:39.487964 env[1221]: 2024-02-09 19:26:39.474 [INFO][3813] k8s.go 491: Wrote updated endpoint to datastore ContainerID="e79b4b795f3e12bca8f081091d37f2d513096278c4a9589f03e50567e8d14520" Namespace="calico-system" Pod="csi-node-driver-4qwfs" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-csi--node--driver--4qwfs-eth0" Feb 9 19:26:39.630812 systemd-networkd[1083]: caliaff8286279a: Link UP Feb 9 19:26:39.646417 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliaff8286279a: link becomes ready Feb 9 19:26:39.650339 systemd-networkd[1083]: caliaff8286279a: Gained carrier Feb 9 19:26:39.652966 env[1221]: time="2024-02-09T19:26:39.652883937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:26:39.653179 env[1221]: time="2024-02-09T19:26:39.653141235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:26:39.660462 env[1221]: time="2024-02-09T19:26:39.660398726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:26:39.684033 env[1221]: time="2024-02-09T19:26:39.673335142Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e79b4b795f3e12bca8f081091d37f2d513096278c4a9589f03e50567e8d14520 pid=3899 runtime=io.containerd.runc.v2 Feb 9 19:26:39.687610 env[1221]: 2024-02-09 19:26:39.252 [INFO][3829] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--fcq6k-eth0 coredns-787d4945fb- kube-system f2e3bd37-daf7-4099-845b-c1f627963705 696 0 2024-02-09 19:26:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal coredns-787d4945fb-fcq6k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliaff8286279a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c1a1e53f690a4fe97bca6d3309bb32ea91d5bd199f4d433b17ee307be768770c" Namespace="kube-system" Pod="coredns-787d4945fb-fcq6k" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--fcq6k-" Feb 9 19:26:39.687610 env[1221]: 2024-02-09 19:26:39.253 [INFO][3829] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="c1a1e53f690a4fe97bca6d3309bb32ea91d5bd199f4d433b17ee307be768770c" Namespace="kube-system" Pod="coredns-787d4945fb-fcq6k" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--fcq6k-eth0" Feb 9 19:26:39.687610 env[1221]: 2024-02-09 19:26:39.528 [INFO][3857] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c1a1e53f690a4fe97bca6d3309bb32ea91d5bd199f4d433b17ee307be768770c" HandleID="k8s-pod-network.c1a1e53f690a4fe97bca6d3309bb32ea91d5bd199f4d433b17ee307be768770c" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--fcq6k-eth0" Feb 9 19:26:39.687610 env[1221]: 2024-02-09 19:26:39.564 [INFO][3857] ipam_plugin.go 268: Auto assigning IP ContainerID="c1a1e53f690a4fe97bca6d3309bb32ea91d5bd199f4d433b17ee307be768770c" HandleID="k8s-pod-network.c1a1e53f690a4fe97bca6d3309bb32ea91d5bd199f4d433b17ee307be768770c" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--fcq6k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325ff0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", "pod":"coredns-787d4945fb-fcq6k", "timestamp":"2024-02-09 19:26:39.528794047 +0000 UTC"}, Hostname:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:26:39.687610 env[1221]: 
2024-02-09 19:26:39.564 [INFO][3857] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:26:39.687610 env[1221]: 2024-02-09 19:26:39.564 [INFO][3857] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:26:39.687610 env[1221]: 2024-02-09 19:26:39.565 [INFO][3857] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal' Feb 9 19:26:39.687610 env[1221]: 2024-02-09 19:26:39.568 [INFO][3857] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c1a1e53f690a4fe97bca6d3309bb32ea91d5bd199f4d433b17ee307be768770c" host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:39.687610 env[1221]: 2024-02-09 19:26:39.577 [INFO][3857] ipam.go 372: Looking up existing affinities for host host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:39.687610 env[1221]: 2024-02-09 19:26:39.582 [INFO][3857] ipam.go 489: Trying affinity for 192.168.113.128/26 host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:39.687610 env[1221]: 2024-02-09 19:26:39.584 [INFO][3857] ipam.go 155: Attempting to load block cidr=192.168.113.128/26 host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:39.687610 env[1221]: 2024-02-09 19:26:39.587 [INFO][3857] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.113.128/26 host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:39.687610 env[1221]: 2024-02-09 19:26:39.587 [INFO][3857] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.113.128/26 handle="k8s-pod-network.c1a1e53f690a4fe97bca6d3309bb32ea91d5bd199f4d433b17ee307be768770c" host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:39.687610 env[1221]: 2024-02-09 19:26:39.595 [INFO][3857] ipam.go 1682: Creating new handle: k8s-pod-network.c1a1e53f690a4fe97bca6d3309bb32ea91d5bd199f4d433b17ee307be768770c Feb 9 19:26:39.687610 env[1221]: 2024-02-09 19:26:39.604 [INFO][3857] ipam.go 1203: Writing block in order to claim IPs block=192.168.113.128/26 handle="k8s-pod-network.c1a1e53f690a4fe97bca6d3309bb32ea91d5bd199f4d433b17ee307be768770c" host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:39.687610 env[1221]: 2024-02-09 19:26:39.615 [INFO][3857] ipam.go 1216: Successfully claimed IPs: [192.168.113.131/26] block=192.168.113.128/26 handle="k8s-pod-network.c1a1e53f690a4fe97bca6d3309bb32ea91d5bd199f4d433b17ee307be768770c" host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:39.687610 env[1221]: 2024-02-09 19:26:39.615 [INFO][3857] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.113.131/26] handle="k8s-pod-network.c1a1e53f690a4fe97bca6d3309bb32ea91d5bd199f4d433b17ee307be768770c" host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:39.687610 env[1221]: 2024-02-09 19:26:39.615 [INFO][3857] ipam_plugin.go 377: Released host-wide IPAM lock. 
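Annotation: the ipam.go sequence above follows Calico's block-affinity model as shown in the log itself: this node holds an affinity for the block 192.168.113.128/26, single addresses are claimed from it under the host-wide IPAM lock, and each workload endpoint is then written with a /32 (192.168.113.129 for the kube-controllers pod earlier, 192.168.113.131 being claimed here). A small sanity check of that arithmetic with Python's ipaddress module; the values are copied from the log and nothing in the code is Calico-specific:

    import ipaddress

    block = ipaddress.ip_network("192.168.113.128/26")    # node-affine block from the ipam.go lines
    pod_ip = ipaddress.ip_address("192.168.113.131")      # address claimed for coredns-787d4945fb-fcq6k

    print(block.num_addresses)                   # 64 addresses available in the block
    print(pod_ip in block)                       # True: the claimed IP falls inside the affine block
    print(ipaddress.ip_network(f"{pod_ip}/32"))  # 192.168.113.131/32, the form written to the endpoint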
Feb 9 19:26:39.687610 env[1221]: 2024-02-09 19:26:39.615 [INFO][3857] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.113.131/26] IPv6=[] ContainerID="c1a1e53f690a4fe97bca6d3309bb32ea91d5bd199f4d433b17ee307be768770c" HandleID="k8s-pod-network.c1a1e53f690a4fe97bca6d3309bb32ea91d5bd199f4d433b17ee307be768770c" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--fcq6k-eth0" Feb 9 19:26:39.688859 env[1221]: 2024-02-09 19:26:39.621 [INFO][3829] k8s.go 385: Populated endpoint ContainerID="c1a1e53f690a4fe97bca6d3309bb32ea91d5bd199f4d433b17ee307be768770c" Namespace="kube-system" Pod="coredns-787d4945fb-fcq6k" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--fcq6k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--fcq6k-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"f2e3bd37-daf7-4099-845b-c1f627963705", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 26, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-787d4945fb-fcq6k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaff8286279a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:26:39.688859 env[1221]: 2024-02-09 19:26:39.622 [INFO][3829] k8s.go 386: Calico CNI using IPs: [192.168.113.131/32] ContainerID="c1a1e53f690a4fe97bca6d3309bb32ea91d5bd199f4d433b17ee307be768770c" Namespace="kube-system" Pod="coredns-787d4945fb-fcq6k" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--fcq6k-eth0" Feb 9 19:26:39.688859 env[1221]: 2024-02-09 19:26:39.622 [INFO][3829] dataplane_linux.go 68: Setting the host side veth name to caliaff8286279a ContainerID="c1a1e53f690a4fe97bca6d3309bb32ea91d5bd199f4d433b17ee307be768770c" Namespace="kube-system" Pod="coredns-787d4945fb-fcq6k" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--fcq6k-eth0" Feb 9 19:26:39.688859 env[1221]: 2024-02-09 19:26:39.653 [INFO][3829] dataplane_linux.go 479: Disabling IPv4 forwarding 
ContainerID="c1a1e53f690a4fe97bca6d3309bb32ea91d5bd199f4d433b17ee307be768770c" Namespace="kube-system" Pod="coredns-787d4945fb-fcq6k" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--fcq6k-eth0" Feb 9 19:26:39.688859 env[1221]: 2024-02-09 19:26:39.659 [INFO][3829] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="c1a1e53f690a4fe97bca6d3309bb32ea91d5bd199f4d433b17ee307be768770c" Namespace="kube-system" Pod="coredns-787d4945fb-fcq6k" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--fcq6k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--fcq6k-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"f2e3bd37-daf7-4099-845b-c1f627963705", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 26, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", ContainerID:"c1a1e53f690a4fe97bca6d3309bb32ea91d5bd199f4d433b17ee307be768770c", Pod:"coredns-787d4945fb-fcq6k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaff8286279a", MAC:"0a:30:60:72:23:80", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:26:39.688859 env[1221]: 2024-02-09 19:26:39.676 [INFO][3829] k8s.go 491: Wrote updated endpoint to datastore ContainerID="c1a1e53f690a4fe97bca6d3309bb32ea91d5bd199f4d433b17ee307be768770c" Namespace="kube-system" Pod="coredns-787d4945fb-fcq6k" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--fcq6k-eth0" Feb 9 19:26:39.718818 kernel: kauditd_printk_skb: 117 callbacks suppressed Feb 9 19:26:39.718972 kernel: audit: type=1325 audit(1707506799.695:311): table=filter:116 family=2 entries=34 op=nft_register_chain pid=3916 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:26:39.695000 audit[3916]: NETFILTER_CFG table=filter:116 family=2 entries=34 op=nft_register_chain pid=3916 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:26:39.781749 kernel: audit: type=1300 audit(1707506799.695:311): arch=c000003e syscall=46 success=yes exit=18320 a0=3 a1=7ffef00f2e60 a2=0 a3=7ffef00f2e4c 
items=0 ppid=3454 pid=3916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.782551 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif589e4c247a: link becomes ready Feb 9 19:26:39.782612 kernel: audit: type=1327 audit(1707506799.695:311): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:26:39.695000 audit[3916]: SYSCALL arch=c000003e syscall=46 success=yes exit=18320 a0=3 a1=7ffef00f2e60 a2=0 a3=7ffef00f2e4c items=0 ppid=3454 pid=3916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.695000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:26:39.782364 systemd-networkd[1083]: calif589e4c247a: Link UP Feb 9 19:26:39.815000 audit[3937]: NETFILTER_CFG table=filter:117 family=2 entries=50 op=nft_register_chain pid=3937 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:26:39.819495 systemd-networkd[1083]: calif589e4c247a: Gained carrier Feb 9 19:26:39.835318 kernel: audit: type=1325 audit(1707506799.815:312): table=filter:117 family=2 entries=50 op=nft_register_chain pid=3937 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:26:39.815000 audit[3937]: SYSCALL arch=c000003e syscall=46 success=yes exit=25136 a0=3 a1=7ffc9c2594b0 a2=0 a3=7ffc9c25949c items=0 ppid=3454 pid=3937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.864831 env[1221]: time="2024-02-09T19:26:39.860905094Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:39.866780 env[1221]: time="2024-02-09T19:26:39.866732087Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4e87edec0297dadd6f3bb25b2f540fd40e2abed9fff582c97ff4cd751d3f9803,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:39.874672 env[1221]: time="2024-02-09T19:26:39.874632229Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:39.878974 env[1221]: time="2024-02-09T19:26:39.878922387Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:e264ab1fb2f1ae90dd1d84e226d11d2eb4350e74ac27de4c65f29f5aadba5bb1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:39.879917 env[1221]: time="2024-02-09T19:26:39.879873250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\" returns image reference \"sha256:4e87edec0297dadd6f3bb25b2f540fd40e2abed9fff582c97ff4cd751d3f9803\"" Feb 9 19:26:39.882315 kernel: audit: type=1300 audit(1707506799.815:312): arch=c000003e syscall=46 success=yes exit=25136 a0=3 a1=7ffc9c2594b0 a2=0 a3=7ffc9c25949c items=0 ppid=3454 pid=3937 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.815000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:26:39.910444 env[1221]: 2024-02-09 19:26:39.300 [INFO][3839] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--kzz8v-eth0 coredns-787d4945fb- kube-system fb8e00fa-f6f2-47c4-a525-9d0f529ec36c 697 0 2024-02-09 19:26:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal coredns-787d4945fb-kzz8v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif589e4c247a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f5768191cdc151ee397b24f597d8c90e6fb196884e6bb284f75a4703569c6b88" Namespace="kube-system" Pod="coredns-787d4945fb-kzz8v" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--kzz8v-" Feb 9 19:26:39.910444 env[1221]: 2024-02-09 19:26:39.300 [INFO][3839] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="f5768191cdc151ee397b24f597d8c90e6fb196884e6bb284f75a4703569c6b88" Namespace="kube-system" Pod="coredns-787d4945fb-kzz8v" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--kzz8v-eth0" Feb 9 19:26:39.910444 env[1221]: 2024-02-09 19:26:39.550 [INFO][3865] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f5768191cdc151ee397b24f597d8c90e6fb196884e6bb284f75a4703569c6b88" HandleID="k8s-pod-network.f5768191cdc151ee397b24f597d8c90e6fb196884e6bb284f75a4703569c6b88" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--kzz8v-eth0" Feb 9 19:26:39.910444 env[1221]: 2024-02-09 19:26:39.580 [INFO][3865] ipam_plugin.go 268: Auto assigning IP ContainerID="f5768191cdc151ee397b24f597d8c90e6fb196884e6bb284f75a4703569c6b88" HandleID="k8s-pod-network.f5768191cdc151ee397b24f597d8c90e6fb196884e6bb284f75a4703569c6b88" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--kzz8v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000500fa0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", "pod":"coredns-787d4945fb-kzz8v", "timestamp":"2024-02-09 19:26:39.550227054 +0000 UTC"}, Hostname:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:26:39.910444 env[1221]: 2024-02-09 19:26:39.580 [INFO][3865] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:26:39.910444 env[1221]: 2024-02-09 19:26:39.616 [INFO][3865] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
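Annotation: in the Go struct dumps above, the WorkloadEndpointPort values print in hex (Port:0x35, Port:0x23c1), while the plugin.go 327 lines list the same ports in decimal ({dns UDP 53 0} ... {metrics TCP 9153 0}). They are the same numbers in two print formats:

    # Hex port values from the endpoint dumps above, shown in decimal.
    ports = {"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1}
    print(ports)   # {'dns': 53, 'dns-tcp': 53, 'metrics': 9153}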
Feb 9 19:26:39.910444 env[1221]: 2024-02-09 19:26:39.616 [INFO][3865] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal' Feb 9 19:26:39.910444 env[1221]: 2024-02-09 19:26:39.619 [INFO][3865] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f5768191cdc151ee397b24f597d8c90e6fb196884e6bb284f75a4703569c6b88" host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:39.910444 env[1221]: 2024-02-09 19:26:39.647 [INFO][3865] ipam.go 372: Looking up existing affinities for host host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:39.910444 env[1221]: 2024-02-09 19:26:39.669 [INFO][3865] ipam.go 489: Trying affinity for 192.168.113.128/26 host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:39.910444 env[1221]: 2024-02-09 19:26:39.675 [INFO][3865] ipam.go 155: Attempting to load block cidr=192.168.113.128/26 host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:39.910444 env[1221]: 2024-02-09 19:26:39.679 [INFO][3865] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.113.128/26 host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:39.910444 env[1221]: 2024-02-09 19:26:39.679 [INFO][3865] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.113.128/26 handle="k8s-pod-network.f5768191cdc151ee397b24f597d8c90e6fb196884e6bb284f75a4703569c6b88" host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:39.910444 env[1221]: 2024-02-09 19:26:39.686 [INFO][3865] ipam.go 1682: Creating new handle: k8s-pod-network.f5768191cdc151ee397b24f597d8c90e6fb196884e6bb284f75a4703569c6b88 Feb 9 19:26:39.910444 env[1221]: 2024-02-09 19:26:39.695 [INFO][3865] ipam.go 1203: Writing block in order to claim IPs block=192.168.113.128/26 handle="k8s-pod-network.f5768191cdc151ee397b24f597d8c90e6fb196884e6bb284f75a4703569c6b88" host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:39.910444 env[1221]: 2024-02-09 19:26:39.705 [INFO][3865] ipam.go 1216: Successfully claimed IPs: [192.168.113.132/26] block=192.168.113.128/26 handle="k8s-pod-network.f5768191cdc151ee397b24f597d8c90e6fb196884e6bb284f75a4703569c6b88" host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:39.910444 env[1221]: 2024-02-09 19:26:39.705 [INFO][3865] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.113.132/26] handle="k8s-pod-network.f5768191cdc151ee397b24f597d8c90e6fb196884e6bb284f75a4703569c6b88" host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:39.910444 env[1221]: 2024-02-09 19:26:39.706 [INFO][3865] ipam_plugin.go 377: Released host-wide IPAM lock. 
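Annotation: the containerd entries in this section (env[1221]: time=... level=info msg=...) are logfmt, with inner quotes backslash-escaped, which makes the sandbox and container IDs awkward to grep directly. A minimal sketch for pulling the msg field out of such lines; the sample line is a shortened, illustrative one, not a verbatim record from this log:

    import re

    # Match the msg="..." field of a containerd logfmt line, allowing \" escapes inside.
    LOGFMT_MSG = re.compile(r'msg="((?:[^"\\]|\\.)*)"')

    def messages(lines):
        for line in lines:
            m = LOGFMT_MSG.search(line)
            if m:
                yield m.group(1).replace('\\"', '"')   # undo the escaping of inner quotes

    sample = r'env[1221]: time="2024-02-09T19:26:40Z" level=info msg="StartContainer for \"dd5faf58...\""'
    print(next(messages([sample])))   # StartContainer for "dd5faf58..."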
Feb 9 19:26:39.910444 env[1221]: 2024-02-09 19:26:39.706 [INFO][3865] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.113.132/26] IPv6=[] ContainerID="f5768191cdc151ee397b24f597d8c90e6fb196884e6bb284f75a4703569c6b88" HandleID="k8s-pod-network.f5768191cdc151ee397b24f597d8c90e6fb196884e6bb284f75a4703569c6b88" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--kzz8v-eth0" Feb 9 19:26:39.911523 env[1221]: 2024-02-09 19:26:39.715 [INFO][3839] k8s.go 385: Populated endpoint ContainerID="f5768191cdc151ee397b24f597d8c90e6fb196884e6bb284f75a4703569c6b88" Namespace="kube-system" Pod="coredns-787d4945fb-kzz8v" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--kzz8v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--kzz8v-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"fb8e00fa-f6f2-47c4-a525-9d0f529ec36c", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 26, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-787d4945fb-kzz8v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif589e4c247a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:26:39.911523 env[1221]: 2024-02-09 19:26:39.719 [INFO][3839] k8s.go 386: Calico CNI using IPs: [192.168.113.132/32] ContainerID="f5768191cdc151ee397b24f597d8c90e6fb196884e6bb284f75a4703569c6b88" Namespace="kube-system" Pod="coredns-787d4945fb-kzz8v" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--kzz8v-eth0" Feb 9 19:26:39.911523 env[1221]: 2024-02-09 19:26:39.719 [INFO][3839] dataplane_linux.go 68: Setting the host side veth name to calif589e4c247a ContainerID="f5768191cdc151ee397b24f597d8c90e6fb196884e6bb284f75a4703569c6b88" Namespace="kube-system" Pod="coredns-787d4945fb-kzz8v" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--kzz8v-eth0" Feb 9 19:26:39.911523 env[1221]: 2024-02-09 19:26:39.820 [INFO][3839] dataplane_linux.go 479: Disabling IPv4 forwarding 
ContainerID="f5768191cdc151ee397b24f597d8c90e6fb196884e6bb284f75a4703569c6b88" Namespace="kube-system" Pod="coredns-787d4945fb-kzz8v" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--kzz8v-eth0" Feb 9 19:26:39.911523 env[1221]: 2024-02-09 19:26:39.841 [INFO][3839] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="f5768191cdc151ee397b24f597d8c90e6fb196884e6bb284f75a4703569c6b88" Namespace="kube-system" Pod="coredns-787d4945fb-kzz8v" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--kzz8v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--kzz8v-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"fb8e00fa-f6f2-47c4-a525-9d0f529ec36c", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 26, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", ContainerID:"f5768191cdc151ee397b24f597d8c90e6fb196884e6bb284f75a4703569c6b88", Pod:"coredns-787d4945fb-kzz8v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif589e4c247a", MAC:"0a:70:f1:0e:ba:9f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:26:39.911523 env[1221]: 2024-02-09 19:26:39.894 [INFO][3839] k8s.go 491: Wrote updated endpoint to datastore ContainerID="f5768191cdc151ee397b24f597d8c90e6fb196884e6bb284f75a4703569c6b88" Namespace="kube-system" Pod="coredns-787d4945fb-kzz8v" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--kzz8v-eth0" Feb 9 19:26:39.915469 kernel: audit: type=1327 audit(1707506799.815:312): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:26:39.919145 env[1221]: time="2024-02-09T19:26:39.919092905Z" level=info msg="CreateContainer within sandbox \"69555bffaf2ab822f626209636506a67c77d73afe9016ac8b21e132bc987184c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 9 19:26:39.947558 env[1221]: time="2024-02-09T19:26:39.947221712Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:26:39.947558 env[1221]: time="2024-02-09T19:26:39.947272333Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:26:39.947558 env[1221]: time="2024-02-09T19:26:39.947305277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:26:39.948664 env[1221]: time="2024-02-09T19:26:39.948548428Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c1a1e53f690a4fe97bca6d3309bb32ea91d5bd199f4d433b17ee307be768770c pid=3956 runtime=io.containerd.runc.v2 Feb 9 19:26:39.972220 kernel: audit: type=1325 audit(1707506799.952:313): table=filter:118 family=2 entries=34 op=nft_register_chain pid=3971 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:26:39.952000 audit[3971]: NETFILTER_CFG table=filter:118 family=2 entries=34 op=nft_register_chain pid=3971 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:26:39.952000 audit[3971]: SYSCALL arch=c000003e syscall=46 success=yes exit=17884 a0=3 a1=7fff80a40880 a2=0 a3=7fff80a4086c items=0 ppid=3454 pid=3971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.990646 env[1221]: time="2024-02-09T19:26:39.990594494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4qwfs,Uid:4296df02-d23c-4462-abee-22483f63c36c,Namespace:calico-system,Attempt:1,} returns sandbox id \"e79b4b795f3e12bca8f081091d37f2d513096278c4a9589f03e50567e8d14520\"" Feb 9 19:26:39.992812 env[1221]: time="2024-02-09T19:26:39.992777562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\"" Feb 9 19:26:40.013866 kernel: audit: type=1300 audit(1707506799.952:313): arch=c000003e syscall=46 success=yes exit=17884 a0=3 a1=7fff80a40880 a2=0 a3=7fff80a4086c items=0 ppid=3454 pid=3971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:39.952000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:26:40.035723 systemd[1]: run-netns-cni\x2d32ad5f59\x2d50de\x2d876a\x2d9658\x2d444b69bdb928.mount: Deactivated successfully. Feb 9 19:26:40.046332 kernel: audit: type=1327 audit(1707506799.952:313): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:26:40.055105 env[1221]: time="2024-02-09T19:26:40.047069525Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:26:40.055105 env[1221]: time="2024-02-09T19:26:40.047129502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:26:40.055105 env[1221]: time="2024-02-09T19:26:40.047156153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:26:40.055105 env[1221]: time="2024-02-09T19:26:40.051195795Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f5768191cdc151ee397b24f597d8c90e6fb196884e6bb284f75a4703569c6b88 pid=4003 runtime=io.containerd.runc.v2 Feb 9 19:26:40.079680 env[1221]: time="2024-02-09T19:26:40.075575251Z" level=info msg="CreateContainer within sandbox \"69555bffaf2ab822f626209636506a67c77d73afe9016ac8b21e132bc987184c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"510b05576ebee6d96d8bdf2244d4f3074861e04e769fd094f51354ca5fa14d1a\"" Feb 9 19:26:40.079680 env[1221]: time="2024-02-09T19:26:40.076762497Z" level=info msg="StartContainer for \"510b05576ebee6d96d8bdf2244d4f3074861e04e769fd094f51354ca5fa14d1a\"" Feb 9 19:26:40.104888 systemd[1]: run-containerd-runc-k8s.io-f5768191cdc151ee397b24f597d8c90e6fb196884e6bb284f75a4703569c6b88-runc.8iEjyz.mount: Deactivated successfully. Feb 9 19:26:40.157415 env[1221]: time="2024-02-09T19:26:40.157363491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-fcq6k,Uid:f2e3bd37-daf7-4099-845b-c1f627963705,Namespace:kube-system,Attempt:1,} returns sandbox id \"c1a1e53f690a4fe97bca6d3309bb32ea91d5bd199f4d433b17ee307be768770c\"" Feb 9 19:26:40.162239 env[1221]: time="2024-02-09T19:26:40.162194736Z" level=info msg="CreateContainer within sandbox \"c1a1e53f690a4fe97bca6d3309bb32ea91d5bd199f4d433b17ee307be768770c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 19:26:40.189114 env[1221]: time="2024-02-09T19:26:40.188254662Z" level=info msg="CreateContainer within sandbox \"c1a1e53f690a4fe97bca6d3309bb32ea91d5bd199f4d433b17ee307be768770c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dd5faf58aa6fd66edb4a992a076f866f4ac33fc4603c387d617ff326a13490d3\"" Feb 9 19:26:40.190149 env[1221]: time="2024-02-09T19:26:40.190108891Z" level=info msg="StartContainer for \"dd5faf58aa6fd66edb4a992a076f866f4ac33fc4603c387d617ff326a13490d3\"" Feb 9 19:26:40.223907 env[1221]: time="2024-02-09T19:26:40.220652533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-kzz8v,Uid:fb8e00fa-f6f2-47c4-a525-9d0f529ec36c,Namespace:kube-system,Attempt:1,} returns sandbox id \"f5768191cdc151ee397b24f597d8c90e6fb196884e6bb284f75a4703569c6b88\"" Feb 9 19:26:40.241330 env[1221]: time="2024-02-09T19:26:40.236706138Z" level=info msg="CreateContainer within sandbox \"f5768191cdc151ee397b24f597d8c90e6fb196884e6bb284f75a4703569c6b88\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 19:26:40.255833 env[1221]: time="2024-02-09T19:26:40.255769071Z" level=info msg="CreateContainer within sandbox \"f5768191cdc151ee397b24f597d8c90e6fb196884e6bb284f75a4703569c6b88\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fda817c66982d7c5031396593c4a5fb1fd6c696e5d81a95f4dd3befa9e9f2a1d\"" Feb 9 19:26:40.256651 env[1221]: time="2024-02-09T19:26:40.256611278Z" level=info msg="StartContainer for \"fda817c66982d7c5031396593c4a5fb1fd6c696e5d81a95f4dd3befa9e9f2a1d\"" Feb 9 19:26:40.329824 env[1221]: time="2024-02-09T19:26:40.329723667Z" level=info msg="StartContainer for \"dd5faf58aa6fd66edb4a992a076f866f4ac33fc4603c387d617ff326a13490d3\" returns successfully" Feb 9 19:26:40.415318 env[1221]: time="2024-02-09T19:26:40.415060994Z" level=info msg="StartContainer for \"fda817c66982d7c5031396593c4a5fb1fd6c696e5d81a95f4dd3befa9e9f2a1d\" returns 
successfully" Feb 9 19:26:40.430521 env[1221]: time="2024-02-09T19:26:40.430461951Z" level=info msg="StartContainer for \"510b05576ebee6d96d8bdf2244d4f3074861e04e769fd094f51354ca5fa14d1a\" returns successfully" Feb 9 19:26:40.872833 systemd-networkd[1083]: cali3b1764cfcb4: Gained IPv6LL Feb 9 19:26:41.031225 kubelet[2267]: I0209 19:26:41.031185 2267 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-kzz8v" podStartSLOduration=37.031139261 pod.CreationTimestamp="2024-02-09 19:26:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:26:40.991558108 +0000 UTC m=+49.949236144" watchObservedRunningTime="2024-02-09 19:26:41.031139261 +0000 UTC m=+49.988817298" Feb 9 19:26:41.070380 kubelet[2267]: I0209 19:26:41.067167 2267 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-76f544f57c-zfhh4" podStartSLOduration=-9.223372005787666e+09 pod.CreationTimestamp="2024-02-09 19:26:10 +0000 UTC" firstStartedPulling="2024-02-09 19:26:36.169967167 +0000 UTC m=+45.127645182" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:26:41.063424681 +0000 UTC m=+50.021102760" watchObservedRunningTime="2024-02-09 19:26:41.067109339 +0000 UTC m=+50.024787373" Feb 9 19:26:41.080738 systemd[1]: run-containerd-runc-k8s.io-510b05576ebee6d96d8bdf2244d4f3074861e04e769fd094f51354ca5fa14d1a-runc.LGAevz.mount: Deactivated successfully. Feb 9 19:26:41.119202 kubelet[2267]: I0209 19:26:41.119157 2267 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-fcq6k" podStartSLOduration=37.119096829 pod.CreationTimestamp="2024-02-09 19:26:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:26:41.088249571 +0000 UTC m=+50.045927608" watchObservedRunningTime="2024-02-09 19:26:41.119096829 +0000 UTC m=+50.076774865" Feb 9 19:26:41.193367 systemd-networkd[1083]: calif589e4c247a: Gained IPv6LL Feb 9 19:26:41.259000 audit[4201]: NETFILTER_CFG table=filter:119 family=2 entries=12 op=nft_register_rule pid=4201 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:41.259000 audit[4201]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7fff8a7fcf60 a2=0 a3=7fff8a7fcf4c items=0 ppid=2426 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:41.278457 kernel: audit: type=1325 audit(1707506801.259:314): table=filter:119 family=2 entries=12 op=nft_register_rule pid=4201 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:41.259000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:41.261000 audit[4201]: NETFILTER_CFG table=nat:120 family=2 entries=30 op=nft_register_rule pid=4201 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:41.261000 audit[4201]: SYSCALL arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7fff8a7fcf60 a2=0 a3=7fff8a7fcf4c items=0 ppid=2426 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:41.261000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:41.319559 systemd-networkd[1083]: caliaff8286279a: Gained IPv6LL Feb 9 19:26:41.421026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1755077809.mount: Deactivated successfully. Feb 9 19:26:41.517000 audit[4227]: NETFILTER_CFG table=filter:121 family=2 entries=9 op=nft_register_rule pid=4227 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:41.517000 audit[4227]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffd9865a080 a2=0 a3=7ffd9865a06c items=0 ppid=2426 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:41.517000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:41.527000 audit[4227]: NETFILTER_CFG table=nat:122 family=2 entries=63 op=nft_register_chain pid=4227 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:41.527000 audit[4227]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffd9865a080 a2=0 a3=7ffd9865a06c items=0 ppid=2426 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:41.527000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:42.112556 env[1221]: time="2024-02-09T19:26:42.112367764Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:42.116420 env[1221]: time="2024-02-09T19:26:42.116374002Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:42.126827 env[1221]: time="2024-02-09T19:26:42.126779478Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:42.137007 env[1221]: time="2024-02-09T19:26:42.136945368Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:2b9021393c17e87ba8a3c89f5b3719941812f4e4751caa0b71eb2233bff48738,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:42.140556 env[1221]: time="2024-02-09T19:26:42.140509420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\" returns image reference \"sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d\"" Feb 9 19:26:42.146000 env[1221]: time="2024-02-09T19:26:42.145958599Z" level=info msg="CreateContainer within sandbox \"e79b4b795f3e12bca8f081091d37f2d513096278c4a9589f03e50567e8d14520\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 9 19:26:42.177943 env[1221]: time="2024-02-09T19:26:42.177776577Z" level=info msg="CreateContainer within sandbox \"e79b4b795f3e12bca8f081091d37f2d513096278c4a9589f03e50567e8d14520\" for 
&ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c76e19a9c445667c83d096ce2b00eea77b1c6f66e5e18324226d1d3c4ae09a84\"" Feb 9 19:26:42.180342 env[1221]: time="2024-02-09T19:26:42.178738281Z" level=info msg="StartContainer for \"c76e19a9c445667c83d096ce2b00eea77b1c6f66e5e18324226d1d3c4ae09a84\"" Feb 9 19:26:42.243856 systemd[1]: run-containerd-runc-k8s.io-c76e19a9c445667c83d096ce2b00eea77b1c6f66e5e18324226d1d3c4ae09a84-runc.MQ1bw2.mount: Deactivated successfully. Feb 9 19:26:42.327042 env[1221]: time="2024-02-09T19:26:42.326983859Z" level=info msg="StartContainer for \"c76e19a9c445667c83d096ce2b00eea77b1c6f66e5e18324226d1d3c4ae09a84\" returns successfully" Feb 9 19:26:42.328813 env[1221]: time="2024-02-09T19:26:42.328770307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\"" Feb 9 19:26:43.828425 env[1221]: time="2024-02-09T19:26:43.828361757Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:43.831478 env[1221]: time="2024-02-09T19:26:43.831440837Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:43.833772 env[1221]: time="2024-02-09T19:26:43.833723928Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:43.836162 env[1221]: time="2024-02-09T19:26:43.836125843Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:45a7aba6020a7cf7b866cb8a8d481b30c97e9b3407e1459aaa65a5b4cc06633a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:43.836881 env[1221]: time="2024-02-09T19:26:43.836832886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\" returns image reference \"sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4\"" Feb 9 19:26:43.840799 env[1221]: time="2024-02-09T19:26:43.840754589Z" level=info msg="CreateContainer within sandbox \"e79b4b795f3e12bca8f081091d37f2d513096278c4a9589f03e50567e8d14520\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 9 19:26:43.876210 env[1221]: time="2024-02-09T19:26:43.876147078Z" level=info msg="CreateContainer within sandbox \"e79b4b795f3e12bca8f081091d37f2d513096278c4a9589f03e50567e8d14520\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"47b8361b339787f1bd4ff4db14c155d982369950f6bc76daa544d5cd70a5cf7a\"" Feb 9 19:26:43.879113 env[1221]: time="2024-02-09T19:26:43.879066293Z" level=info msg="StartContainer for \"47b8361b339787f1bd4ff4db14c155d982369950f6bc76daa544d5cd70a5cf7a\"" Feb 9 19:26:43.880477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1948294369.mount: Deactivated successfully. Feb 9 19:26:43.944698 systemd[1]: run-containerd-runc-k8s.io-47b8361b339787f1bd4ff4db14c155d982369950f6bc76daa544d5cd70a5cf7a-runc.Al72yR.mount: Deactivated successfully. 
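The pod_startup_latency_tracker entries in the surrounding records report podStartSLOduration values near -9.22e+09 seconds whenever lastFinishedPulling is still the zero time (0001-01-01 00:00:00 +0000 UTC), i.e. the image pull has not finished by the time the pod is observed running. That magnitude is Go's minimum time.Duration: time.Time.Sub saturates instead of overflowing when its operands are that far apart. A minimal sketch of the arithmetic, reusing the firstStartedPulling timestamp logged above and assuming nothing about kubelet's exact bookkeeping:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// firstStartedPulling copied from the tracker line above; lastFinishedPulling
	// is still unset (Go's zero time) while the pull is in flight.
	firstStartedPulling := time.Date(2024, time.February, 9, 19, 26, 36, 169967167, time.UTC)
	lastFinishedPulling := time.Time{} // 0001-01-01 00:00:00 +0000 UTC

	// Sub saturates at the minimum int64 number of nanoseconds rather than overflowing.
	d := lastFinishedPulling.Sub(firstStartedPulling)
	fmt.Println(d)           // -2562047h47m16.854775808s
	fmt.Println(d.Seconds()) // about -9.223372036854776e+09
}
```

The logged figures (-9.223372005787666e+09 above and -9.22337200183049e+09 further down) sit a few tens of seconds above that floor, which is consistent with the tracker adding the pod's real start latency on top of the saturated negative pull duration; the sign and magnitude are what identify the pattern.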
Feb 9 19:26:44.033555 env[1221]: time="2024-02-09T19:26:44.033487195Z" level=info msg="StartContainer for \"47b8361b339787f1bd4ff4db14c155d982369950f6bc76daa544d5cd70a5cf7a\" returns successfully" Feb 9 19:26:44.786080 kubelet[2267]: I0209 19:26:44.786031 2267 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 9 19:26:44.786080 kubelet[2267]: I0209 19:26:44.786072 2267 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 9 19:26:45.024390 kubelet[2267]: I0209 19:26:45.024354 2267 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-4qwfs" podStartSLOduration=-9.22337200183049e+09 pod.CreationTimestamp="2024-02-09 19:26:10 +0000 UTC" firstStartedPulling="2024-02-09 19:26:39.992237238 +0000 UTC m=+48.949915256" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:26:45.022311434 +0000 UTC m=+53.979989470" watchObservedRunningTime="2024-02-09 19:26:45.024285123 +0000 UTC m=+53.981963164" Feb 9 19:26:45.598746 systemd[1]: run-containerd-runc-k8s.io-70f0f32a7febb78233af606056990407f65e20b485ef0338d6bc7630fdd27813-runc.LwDsbB.mount: Deactivated successfully. Feb 9 19:26:49.561963 kubelet[2267]: I0209 19:26:49.561916 2267 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:26:49.578981 kubelet[2267]: I0209 19:26:49.578931 2267 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:26:49.709000 audit[4356]: NETFILTER_CFG table=filter:123 family=2 entries=6 op=nft_register_rule pid=4356 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:49.715598 kernel: kauditd_printk_skb: 11 callbacks suppressed Feb 9 19:26:49.715682 kernel: audit: type=1325 audit(1707506809.709:318): table=filter:123 family=2 entries=6 op=nft_register_rule pid=4356 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:49.709000 audit[4356]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffd985fa980 a2=0 a3=7ffd985fa96c items=0 ppid=2426 pid=4356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:49.758618 kubelet[2267]: I0209 19:26:49.743572 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2sgp\" (UniqueName: \"kubernetes.io/projected/1eb5b4d1-96b6-4cb3-a3ff-30fc748fa1eb-kube-api-access-w2sgp\") pod \"calico-apiserver-8555cf7879-t999h\" (UID: \"1eb5b4d1-96b6-4cb3-a3ff-30fc748fa1eb\") " pod="calico-apiserver/calico-apiserver-8555cf7879-t999h" Feb 9 19:26:49.758618 kubelet[2267]: I0209 19:26:49.743652 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5xg4\" (UniqueName: \"kubernetes.io/projected/204ea4bc-c919-46f9-900b-b2166b33c698-kube-api-access-q5xg4\") pod \"calico-apiserver-8555cf7879-zz4dq\" (UID: \"204ea4bc-c919-46f9-900b-b2166b33c698\") " pod="calico-apiserver/calico-apiserver-8555cf7879-zz4dq" Feb 9 19:26:49.758618 kubelet[2267]: I0209 19:26:49.743708 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/204ea4bc-c919-46f9-900b-b2166b33c698-calico-apiserver-certs\") pod \"calico-apiserver-8555cf7879-zz4dq\" (UID: \"204ea4bc-c919-46f9-900b-b2166b33c698\") " pod="calico-apiserver/calico-apiserver-8555cf7879-zz4dq" Feb 9 19:26:49.758618 kubelet[2267]: I0209 19:26:49.743751 2267 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1eb5b4d1-96b6-4cb3-a3ff-30fc748fa1eb-calico-apiserver-certs\") pod \"calico-apiserver-8555cf7879-t999h\" (UID: \"1eb5b4d1-96b6-4cb3-a3ff-30fc748fa1eb\") " pod="calico-apiserver/calico-apiserver-8555cf7879-t999h" Feb 9 19:26:49.709000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:49.782463 kernel: audit: type=1300 audit(1707506809.709:318): arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffd985fa980 a2=0 a3=7ffd985fa96c items=0 ppid=2426 pid=4356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:49.782578 kernel: audit: type=1327 audit(1707506809.709:318): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:49.799343 kernel: audit: type=1325 audit(1707506809.709:319): table=nat:124 family=2 entries=78 op=nft_register_rule pid=4356 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:49.709000 audit[4356]: NETFILTER_CFG table=nat:124 family=2 entries=78 op=nft_register_rule pid=4356 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:49.709000 audit[4356]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffd985fa980 a2=0 a3=7ffd985fa96c items=0 ppid=2426 pid=4356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:49.836323 kernel: audit: type=1300 audit(1707506809.709:319): arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffd985fa980 a2=0 a3=7ffd985fa96c items=0 ppid=2426 pid=4356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:49.709000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:49.844783 kubelet[2267]: E0209 19:26:49.844754 2267 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Feb 9 19:26:49.845063 kubelet[2267]: E0209 19:26:49.845044 2267 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/204ea4bc-c919-46f9-900b-b2166b33c698-calico-apiserver-certs podName:204ea4bc-c919-46f9-900b-b2166b33c698 nodeName:}" failed. No retries permitted until 2024-02-09 19:26:50.345013347 +0000 UTC m=+59.302691377 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/204ea4bc-c919-46f9-900b-b2166b33c698-calico-apiserver-certs") pod "calico-apiserver-8555cf7879-zz4dq" (UID: "204ea4bc-c919-46f9-900b-b2166b33c698") : secret "calico-apiserver-certs" not found Feb 9 19:26:49.845553 kubelet[2267]: E0209 19:26:49.845535 2267 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Feb 9 19:26:49.845733 kubelet[2267]: E0209 19:26:49.845720 2267 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1eb5b4d1-96b6-4cb3-a3ff-30fc748fa1eb-calico-apiserver-certs podName:1eb5b4d1-96b6-4cb3-a3ff-30fc748fa1eb nodeName:}" failed. No retries permitted until 2024-02-09 19:26:50.345701005 +0000 UTC m=+59.303379041 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/1eb5b4d1-96b6-4cb3-a3ff-30fc748fa1eb-calico-apiserver-certs") pod "calico-apiserver-8555cf7879-t999h" (UID: "1eb5b4d1-96b6-4cb3-a3ff-30fc748fa1eb") : secret "calico-apiserver-certs" not found Feb 9 19:26:49.837000 audit[4382]: NETFILTER_CFG table=filter:125 family=2 entries=7 op=nft_register_rule pid=4382 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:49.873467 kernel: audit: type=1327 audit(1707506809.709:319): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:49.873606 kernel: audit: type=1325 audit(1707506809.837:320): table=filter:125 family=2 entries=7 op=nft_register_rule pid=4382 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:49.837000 audit[4382]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffe83014550 a2=0 a3=7ffe8301453c items=0 ppid=2426 pid=4382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:49.951316 kernel: audit: type=1300 audit(1707506809.837:320): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffe83014550 a2=0 a3=7ffe8301453c items=0 ppid=2426 pid=4382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:49.837000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:49.842000 audit[4382]: NETFILTER_CFG table=nat:126 family=2 entries=78 op=nft_register_rule pid=4382 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:49.983598 kernel: audit: type=1327 audit(1707506809.837:320): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:49.983720 kernel: audit: type=1325 audit(1707506809.842:321): table=nat:126 family=2 entries=78 op=nft_register_rule pid=4382 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:49.842000 audit[4382]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffe83014550 a2=0 a3=7ffe8301453c items=0 ppid=2426 pid=4382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 
19:26:49.842000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:50.470092 env[1221]: time="2024-02-09T19:26:50.470032129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8555cf7879-zz4dq,Uid:204ea4bc-c919-46f9-900b-b2166b33c698,Namespace:calico-apiserver,Attempt:0,}" Feb 9 19:26:50.485129 env[1221]: time="2024-02-09T19:26:50.485059923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8555cf7879-t999h,Uid:1eb5b4d1-96b6-4cb3-a3ff-30fc748fa1eb,Namespace:calico-apiserver,Attempt:0,}" Feb 9 19:26:50.728547 systemd-networkd[1083]: cali8118756c8a0: Link UP Feb 9 19:26:50.737553 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:26:50.746090 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali8118756c8a0: link becomes ready Feb 9 19:26:50.749931 systemd-networkd[1083]: cali8118756c8a0: Gained carrier Feb 9 19:26:50.783685 env[1221]: 2024-02-09 19:26:50.583 [INFO][4390] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--apiserver--8555cf7879--zz4dq-eth0 calico-apiserver-8555cf7879- calico-apiserver 204ea4bc-c919-46f9-900b-b2166b33c698 825 0 2024-02-09 19:26:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8555cf7879 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal calico-apiserver-8555cf7879-zz4dq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8118756c8a0 [] []}} ContainerID="12ceb74ce68855f630b4a01254df0070b50308e4b5029c84176363ee6075fa69" Namespace="calico-apiserver" Pod="calico-apiserver-8555cf7879-zz4dq" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--apiserver--8555cf7879--zz4dq-" Feb 9 19:26:50.783685 env[1221]: 2024-02-09 19:26:50.584 [INFO][4390] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="12ceb74ce68855f630b4a01254df0070b50308e4b5029c84176363ee6075fa69" Namespace="calico-apiserver" Pod="calico-apiserver-8555cf7879-zz4dq" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--apiserver--8555cf7879--zz4dq-eth0" Feb 9 19:26:50.783685 env[1221]: 2024-02-09 19:26:50.646 [INFO][4411] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="12ceb74ce68855f630b4a01254df0070b50308e4b5029c84176363ee6075fa69" HandleID="k8s-pod-network.12ceb74ce68855f630b4a01254df0070b50308e4b5029c84176363ee6075fa69" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--apiserver--8555cf7879--zz4dq-eth0" Feb 9 19:26:50.783685 env[1221]: 2024-02-09 19:26:50.668 [INFO][4411] ipam_plugin.go 268: Auto assigning IP ContainerID="12ceb74ce68855f630b4a01254df0070b50308e4b5029c84176363ee6075fa69" HandleID="k8s-pod-network.12ceb74ce68855f630b4a01254df0070b50308e4b5029c84176363ee6075fa69" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--apiserver--8555cf7879--zz4dq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002bedc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", 
"pod":"calico-apiserver-8555cf7879-zz4dq", "timestamp":"2024-02-09 19:26:50.646928738 +0000 UTC"}, Hostname:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:26:50.783685 env[1221]: 2024-02-09 19:26:50.669 [INFO][4411] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:26:50.783685 env[1221]: 2024-02-09 19:26:50.669 [INFO][4411] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:26:50.783685 env[1221]: 2024-02-09 19:26:50.669 [INFO][4411] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal' Feb 9 19:26:50.783685 env[1221]: 2024-02-09 19:26:50.674 [INFO][4411] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.12ceb74ce68855f630b4a01254df0070b50308e4b5029c84176363ee6075fa69" host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:50.783685 env[1221]: 2024-02-09 19:26:50.679 [INFO][4411] ipam.go 372: Looking up existing affinities for host host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:50.783685 env[1221]: 2024-02-09 19:26:50.684 [INFO][4411] ipam.go 489: Trying affinity for 192.168.113.128/26 host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:50.783685 env[1221]: 2024-02-09 19:26:50.686 [INFO][4411] ipam.go 155: Attempting to load block cidr=192.168.113.128/26 host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:50.783685 env[1221]: 2024-02-09 19:26:50.696 [INFO][4411] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.113.128/26 host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:50.783685 env[1221]: 2024-02-09 19:26:50.696 [INFO][4411] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.113.128/26 handle="k8s-pod-network.12ceb74ce68855f630b4a01254df0070b50308e4b5029c84176363ee6075fa69" host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:50.783685 env[1221]: 2024-02-09 19:26:50.700 [INFO][4411] ipam.go 1682: Creating new handle: k8s-pod-network.12ceb74ce68855f630b4a01254df0070b50308e4b5029c84176363ee6075fa69 Feb 9 19:26:50.783685 env[1221]: 2024-02-09 19:26:50.707 [INFO][4411] ipam.go 1203: Writing block in order to claim IPs block=192.168.113.128/26 handle="k8s-pod-network.12ceb74ce68855f630b4a01254df0070b50308e4b5029c84176363ee6075fa69" host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:50.783685 env[1221]: 2024-02-09 19:26:50.716 [INFO][4411] ipam.go 1216: Successfully claimed IPs: [192.168.113.133/26] block=192.168.113.128/26 handle="k8s-pod-network.12ceb74ce68855f630b4a01254df0070b50308e4b5029c84176363ee6075fa69" host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:50.783685 env[1221]: 2024-02-09 19:26:50.716 [INFO][4411] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.113.133/26] handle="k8s-pod-network.12ceb74ce68855f630b4a01254df0070b50308e4b5029c84176363ee6075fa69" host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:50.783685 env[1221]: 2024-02-09 19:26:50.716 [INFO][4411] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 19:26:50.783685 env[1221]: 2024-02-09 19:26:50.716 [INFO][4411] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.113.133/26] IPv6=[] ContainerID="12ceb74ce68855f630b4a01254df0070b50308e4b5029c84176363ee6075fa69" HandleID="k8s-pod-network.12ceb74ce68855f630b4a01254df0070b50308e4b5029c84176363ee6075fa69" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--apiserver--8555cf7879--zz4dq-eth0" Feb 9 19:26:50.784861 env[1221]: 2024-02-09 19:26:50.719 [INFO][4390] k8s.go 385: Populated endpoint ContainerID="12ceb74ce68855f630b4a01254df0070b50308e4b5029c84176363ee6075fa69" Namespace="calico-apiserver" Pod="calico-apiserver-8555cf7879-zz4dq" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--apiserver--8555cf7879--zz4dq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--apiserver--8555cf7879--zz4dq-eth0", GenerateName:"calico-apiserver-8555cf7879-", Namespace:"calico-apiserver", SelfLink:"", UID:"204ea4bc-c919-46f9-900b-b2166b33c698", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 26, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8555cf7879", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-8555cf7879-zz4dq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8118756c8a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:26:50.784861 env[1221]: 2024-02-09 19:26:50.719 [INFO][4390] k8s.go 386: Calico CNI using IPs: [192.168.113.133/32] ContainerID="12ceb74ce68855f630b4a01254df0070b50308e4b5029c84176363ee6075fa69" Namespace="calico-apiserver" Pod="calico-apiserver-8555cf7879-zz4dq" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--apiserver--8555cf7879--zz4dq-eth0" Feb 9 19:26:50.784861 env[1221]: 2024-02-09 19:26:50.719 [INFO][4390] dataplane_linux.go 68: Setting the host side veth name to cali8118756c8a0 ContainerID="12ceb74ce68855f630b4a01254df0070b50308e4b5029c84176363ee6075fa69" Namespace="calico-apiserver" Pod="calico-apiserver-8555cf7879-zz4dq" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--apiserver--8555cf7879--zz4dq-eth0" Feb 9 19:26:50.784861 env[1221]: 2024-02-09 19:26:50.750 [INFO][4390] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="12ceb74ce68855f630b4a01254df0070b50308e4b5029c84176363ee6075fa69" Namespace="calico-apiserver" Pod="calico-apiserver-8555cf7879-zz4dq" 
WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--apiserver--8555cf7879--zz4dq-eth0" Feb 9 19:26:50.784861 env[1221]: 2024-02-09 19:26:50.752 [INFO][4390] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="12ceb74ce68855f630b4a01254df0070b50308e4b5029c84176363ee6075fa69" Namespace="calico-apiserver" Pod="calico-apiserver-8555cf7879-zz4dq" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--apiserver--8555cf7879--zz4dq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--apiserver--8555cf7879--zz4dq-eth0", GenerateName:"calico-apiserver-8555cf7879-", Namespace:"calico-apiserver", SelfLink:"", UID:"204ea4bc-c919-46f9-900b-b2166b33c698", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 26, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8555cf7879", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", ContainerID:"12ceb74ce68855f630b4a01254df0070b50308e4b5029c84176363ee6075fa69", Pod:"calico-apiserver-8555cf7879-zz4dq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8118756c8a0", MAC:"02:42:97:85:6f:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:26:50.784861 env[1221]: 2024-02-09 19:26:50.774 [INFO][4390] k8s.go 491: Wrote updated endpoint to datastore ContainerID="12ceb74ce68855f630b4a01254df0070b50308e4b5029c84176363ee6075fa69" Namespace="calico-apiserver" Pod="calico-apiserver-8555cf7879-zz4dq" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--apiserver--8555cf7879--zz4dq-eth0" Feb 9 19:26:50.828377 systemd-networkd[1083]: cali5a082fb0fad: Link UP Feb 9 19:26:50.856365 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5a082fb0fad: link becomes ready Feb 9 19:26:50.857350 env[1221]: time="2024-02-09T19:26:50.857218578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:26:50.861779 systemd-networkd[1083]: cali5a082fb0fad: Gained carrier Feb 9 19:26:50.863132 env[1221]: time="2024-02-09T19:26:50.857551547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:26:50.863572 env[1221]: time="2024-02-09T19:26:50.863514172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:26:50.864236 env[1221]: time="2024-02-09T19:26:50.864168931Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/12ceb74ce68855f630b4a01254df0070b50308e4b5029c84176363ee6075fa69 pid=4451 runtime=io.containerd.runc.v2 Feb 9 19:26:50.866713 env[1221]: 2024-02-09 19:26:50.617 [INFO][4400] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--apiserver--8555cf7879--t999h-eth0 calico-apiserver-8555cf7879- calico-apiserver 1eb5b4d1-96b6-4cb3-a3ff-30fc748fa1eb 828 0 2024-02-09 19:26:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8555cf7879 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal calico-apiserver-8555cf7879-t999h eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5a082fb0fad [] []}} ContainerID="d4f7ea06b873a2520cb4dfce02a275c8b221641565f3a0b1ae4cd513df1493f3" Namespace="calico-apiserver" Pod="calico-apiserver-8555cf7879-t999h" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--apiserver--8555cf7879--t999h-" Feb 9 19:26:50.866713 env[1221]: 2024-02-09 19:26:50.617 [INFO][4400] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="d4f7ea06b873a2520cb4dfce02a275c8b221641565f3a0b1ae4cd513df1493f3" Namespace="calico-apiserver" Pod="calico-apiserver-8555cf7879-t999h" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--apiserver--8555cf7879--t999h-eth0" Feb 9 19:26:50.866713 env[1221]: 2024-02-09 19:26:50.690 [INFO][4418] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d4f7ea06b873a2520cb4dfce02a275c8b221641565f3a0b1ae4cd513df1493f3" HandleID="k8s-pod-network.d4f7ea06b873a2520cb4dfce02a275c8b221641565f3a0b1ae4cd513df1493f3" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--apiserver--8555cf7879--t999h-eth0" Feb 9 19:26:50.866713 env[1221]: 2024-02-09 19:26:50.733 [INFO][4418] ipam_plugin.go 268: Auto assigning IP ContainerID="d4f7ea06b873a2520cb4dfce02a275c8b221641565f3a0b1ae4cd513df1493f3" HandleID="k8s-pod-network.d4f7ea06b873a2520cb4dfce02a275c8b221641565f3a0b1ae4cd513df1493f3" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--apiserver--8555cf7879--t999h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291460), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", "pod":"calico-apiserver-8555cf7879-t999h", "timestamp":"2024-02-09 19:26:50.690223759 +0000 UTC"}, Hostname:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:26:50.866713 env[1221]: 2024-02-09 19:26:50.734 [INFO][4418] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:26:50.866713 env[1221]: 2024-02-09 19:26:50.734 [INFO][4418] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 19:26:50.866713 env[1221]: 2024-02-09 19:26:50.734 [INFO][4418] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal' Feb 9 19:26:50.866713 env[1221]: 2024-02-09 19:26:50.747 [INFO][4418] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d4f7ea06b873a2520cb4dfce02a275c8b221641565f3a0b1ae4cd513df1493f3" host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:50.866713 env[1221]: 2024-02-09 19:26:50.784 [INFO][4418] ipam.go 372: Looking up existing affinities for host host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:50.866713 env[1221]: 2024-02-09 19:26:50.795 [INFO][4418] ipam.go 489: Trying affinity for 192.168.113.128/26 host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:50.866713 env[1221]: 2024-02-09 19:26:50.797 [INFO][4418] ipam.go 155: Attempting to load block cidr=192.168.113.128/26 host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:50.866713 env[1221]: 2024-02-09 19:26:50.801 [INFO][4418] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.113.128/26 host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:50.866713 env[1221]: 2024-02-09 19:26:50.802 [INFO][4418] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.113.128/26 handle="k8s-pod-network.d4f7ea06b873a2520cb4dfce02a275c8b221641565f3a0b1ae4cd513df1493f3" host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:50.866713 env[1221]: 2024-02-09 19:26:50.804 [INFO][4418] ipam.go 1682: Creating new handle: k8s-pod-network.d4f7ea06b873a2520cb4dfce02a275c8b221641565f3a0b1ae4cd513df1493f3 Feb 9 19:26:50.866713 env[1221]: 2024-02-09 19:26:50.809 [INFO][4418] ipam.go 1203: Writing block in order to claim IPs block=192.168.113.128/26 handle="k8s-pod-network.d4f7ea06b873a2520cb4dfce02a275c8b221641565f3a0b1ae4cd513df1493f3" host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:50.866713 env[1221]: 2024-02-09 19:26:50.817 [INFO][4418] ipam.go 1216: Successfully claimed IPs: [192.168.113.134/26] block=192.168.113.128/26 handle="k8s-pod-network.d4f7ea06b873a2520cb4dfce02a275c8b221641565f3a0b1ae4cd513df1493f3" host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:50.866713 env[1221]: 2024-02-09 19:26:50.818 [INFO][4418] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.113.134/26] handle="k8s-pod-network.d4f7ea06b873a2520cb4dfce02a275c8b221641565f3a0b1ae4cd513df1493f3" host="ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal" Feb 9 19:26:50.866713 env[1221]: 2024-02-09 19:26:50.818 [INFO][4418] ipam_plugin.go 377: Released host-wide IPAM lock. 
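Both IPAM runs above resolve to the same affine block, 192.168.113.128/26, before claiming 192.168.113.133 and 192.168.113.134; the coredns pod earlier in this section received 192.168.113.132 from it as well. A quick standalone check (no Calico code involved) that these addresses fall inside that /26, which covers hosts .128 through .191:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Affine IPAM block reported by ipam.go in the records above.
	block := netip.MustParsePrefix("192.168.113.128/26")

	// Pod addresses assigned from it in this section.
	for _, s := range []string{"192.168.113.132", "192.168.113.133", "192.168.113.134"} {
		addr := netip.MustParseAddr(s)
		fmt.Printf("%s inside %s: %v\n", addr, block, block.Contains(addr))
	}
}
```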
Feb 9 19:26:50.866713 env[1221]: 2024-02-09 19:26:50.818 [INFO][4418] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.113.134/26] IPv6=[] ContainerID="d4f7ea06b873a2520cb4dfce02a275c8b221641565f3a0b1ae4cd513df1493f3" HandleID="k8s-pod-network.d4f7ea06b873a2520cb4dfce02a275c8b221641565f3a0b1ae4cd513df1493f3" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--apiserver--8555cf7879--t999h-eth0" Feb 9 19:26:50.867837 env[1221]: 2024-02-09 19:26:50.820 [INFO][4400] k8s.go 385: Populated endpoint ContainerID="d4f7ea06b873a2520cb4dfce02a275c8b221641565f3a0b1ae4cd513df1493f3" Namespace="calico-apiserver" Pod="calico-apiserver-8555cf7879-t999h" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--apiserver--8555cf7879--t999h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--apiserver--8555cf7879--t999h-eth0", GenerateName:"calico-apiserver-8555cf7879-", Namespace:"calico-apiserver", SelfLink:"", UID:"1eb5b4d1-96b6-4cb3-a3ff-30fc748fa1eb", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 26, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8555cf7879", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-8555cf7879-t999h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5a082fb0fad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:26:50.867837 env[1221]: 2024-02-09 19:26:50.820 [INFO][4400] k8s.go 386: Calico CNI using IPs: [192.168.113.134/32] ContainerID="d4f7ea06b873a2520cb4dfce02a275c8b221641565f3a0b1ae4cd513df1493f3" Namespace="calico-apiserver" Pod="calico-apiserver-8555cf7879-t999h" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--apiserver--8555cf7879--t999h-eth0" Feb 9 19:26:50.867837 env[1221]: 2024-02-09 19:26:50.820 [INFO][4400] dataplane_linux.go 68: Setting the host side veth name to cali5a082fb0fad ContainerID="d4f7ea06b873a2520cb4dfce02a275c8b221641565f3a0b1ae4cd513df1493f3" Namespace="calico-apiserver" Pod="calico-apiserver-8555cf7879-t999h" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--apiserver--8555cf7879--t999h-eth0" Feb 9 19:26:50.867837 env[1221]: 2024-02-09 19:26:50.838 [INFO][4400] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d4f7ea06b873a2520cb4dfce02a275c8b221641565f3a0b1ae4cd513df1493f3" Namespace="calico-apiserver" Pod="calico-apiserver-8555cf7879-t999h" 
WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--apiserver--8555cf7879--t999h-eth0" Feb 9 19:26:50.867837 env[1221]: 2024-02-09 19:26:50.838 [INFO][4400] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="d4f7ea06b873a2520cb4dfce02a275c8b221641565f3a0b1ae4cd513df1493f3" Namespace="calico-apiserver" Pod="calico-apiserver-8555cf7879-t999h" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--apiserver--8555cf7879--t999h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--apiserver--8555cf7879--t999h-eth0", GenerateName:"calico-apiserver-8555cf7879-", Namespace:"calico-apiserver", SelfLink:"", UID:"1eb5b4d1-96b6-4cb3-a3ff-30fc748fa1eb", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 26, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8555cf7879", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", ContainerID:"d4f7ea06b873a2520cb4dfce02a275c8b221641565f3a0b1ae4cd513df1493f3", Pod:"calico-apiserver-8555cf7879-t999h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5a082fb0fad", MAC:"8a:33:96:5a:61:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:26:50.867837 env[1221]: 2024-02-09 19:26:50.863 [INFO][4400] k8s.go 491: Wrote updated endpoint to datastore ContainerID="d4f7ea06b873a2520cb4dfce02a275c8b221641565f3a0b1ae4cd513df1493f3" Namespace="calico-apiserver" Pod="calico-apiserver-8555cf7879-t999h" WorkloadEndpoint="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--apiserver--8555cf7879--t999h-eth0" Feb 9 19:26:50.881000 audit[4471]: NETFILTER_CFG table=filter:127 family=2 entries=55 op=nft_register_chain pid=4471 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:26:50.881000 audit[4471]: SYSCALL arch=c000003e syscall=46 success=yes exit=28088 a0=3 a1=7ffe0ff8ec10 a2=0 a3=7ffe0ff8ebfc items=0 ppid=3454 pid=4471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:50.881000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:26:50.952000 audit[4482]: NETFILTER_CFG table=filter:128 family=2 entries=46 op=nft_register_chain pid=4482 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:26:50.953244 systemd[1]: 
run-containerd-runc-k8s.io-12ceb74ce68855f630b4a01254df0070b50308e4b5029c84176363ee6075fa69-runc.evNlce.mount: Deactivated successfully. Feb 9 19:26:50.952000 audit[4482]: SYSCALL arch=c000003e syscall=46 success=yes exit=23292 a0=3 a1=7ffddfc0da90 a2=0 a3=7ffddfc0da7c items=0 ppid=3454 pid=4482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:50.952000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:26:50.972272 env[1221]: time="2024-02-09T19:26:50.972000918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:26:50.972272 env[1221]: time="2024-02-09T19:26:50.972063159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:26:50.972272 env[1221]: time="2024-02-09T19:26:50.972083323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:26:50.972596 env[1221]: time="2024-02-09T19:26:50.972378324Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d4f7ea06b873a2520cb4dfce02a275c8b221641565f3a0b1ae4cd513df1493f3 pid=4495 runtime=io.containerd.runc.v2 Feb 9 19:26:51.087742 env[1221]: time="2024-02-09T19:26:51.087682348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8555cf7879-zz4dq,Uid:204ea4bc-c919-46f9-900b-b2166b33c698,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"12ceb74ce68855f630b4a01254df0070b50308e4b5029c84176363ee6075fa69\"" Feb 9 19:26:51.091665 env[1221]: time="2024-02-09T19:26:51.091621394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\"" Feb 9 19:26:51.118809 env[1221]: time="2024-02-09T19:26:51.118754501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8555cf7879-t999h,Uid:1eb5b4d1-96b6-4cb3-a3ff-30fc748fa1eb,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d4f7ea06b873a2520cb4dfce02a275c8b221641565f3a0b1ae4cd513df1493f3\"" Feb 9 19:26:51.412655 env[1221]: time="2024-02-09T19:26:51.412173479Z" level=info msg="StopPodSandbox for \"604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6\"" Feb 9 19:26:51.500401 env[1221]: 2024-02-09 19:26:51.459 [WARNING][4551] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--kzz8v-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"fb8e00fa-f6f2-47c4-a525-9d0f529ec36c", ResourceVersion:"730", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 26, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", ContainerID:"f5768191cdc151ee397b24f597d8c90e6fb196884e6bb284f75a4703569c6b88", Pod:"coredns-787d4945fb-kzz8v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif589e4c247a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:26:51.500401 env[1221]: 2024-02-09 19:26:51.460 [INFO][4551] k8s.go 578: Cleaning up netns ContainerID="604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" Feb 9 19:26:51.500401 env[1221]: 2024-02-09 19:26:51.460 [INFO][4551] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" iface="eth0" netns="" Feb 9 19:26:51.500401 env[1221]: 2024-02-09 19:26:51.460 [INFO][4551] k8s.go 585: Releasing IP address(es) ContainerID="604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" Feb 9 19:26:51.500401 env[1221]: 2024-02-09 19:26:51.460 [INFO][4551] utils.go 188: Calico CNI releasing IP address ContainerID="604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" Feb 9 19:26:51.500401 env[1221]: 2024-02-09 19:26:51.486 [INFO][4557] ipam_plugin.go 415: Releasing address using handleID ContainerID="604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" HandleID="k8s-pod-network.604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--kzz8v-eth0" Feb 9 19:26:51.500401 env[1221]: 2024-02-09 19:26:51.486 [INFO][4557] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:26:51.500401 env[1221]: 2024-02-09 19:26:51.486 [INFO][4557] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 19:26:51.500401 env[1221]: 2024-02-09 19:26:51.494 [WARNING][4557] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" HandleID="k8s-pod-network.604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--kzz8v-eth0" Feb 9 19:26:51.500401 env[1221]: 2024-02-09 19:26:51.494 [INFO][4557] ipam_plugin.go 443: Releasing address using workloadID ContainerID="604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" HandleID="k8s-pod-network.604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--kzz8v-eth0" Feb 9 19:26:51.500401 env[1221]: 2024-02-09 19:26:51.496 [INFO][4557] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:26:51.500401 env[1221]: 2024-02-09 19:26:51.498 [INFO][4551] k8s.go 591: Teardown processing complete. ContainerID="604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" Feb 9 19:26:51.501490 env[1221]: time="2024-02-09T19:26:51.500445354Z" level=info msg="TearDown network for sandbox \"604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6\" successfully" Feb 9 19:26:51.501490 env[1221]: time="2024-02-09T19:26:51.500485030Z" level=info msg="StopPodSandbox for \"604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6\" returns successfully" Feb 9 19:26:51.502185 env[1221]: time="2024-02-09T19:26:51.502143728Z" level=info msg="RemovePodSandbox for \"604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6\"" Feb 9 19:26:51.502356 env[1221]: time="2024-02-09T19:26:51.502194246Z" level=info msg="Forcibly stopping sandbox \"604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6\"" Feb 9 19:26:51.591617 env[1221]: 2024-02-09 19:26:51.543 [WARNING][4575] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--kzz8v-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"fb8e00fa-f6f2-47c4-a525-9d0f529ec36c", ResourceVersion:"730", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 26, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", ContainerID:"f5768191cdc151ee397b24f597d8c90e6fb196884e6bb284f75a4703569c6b88", Pod:"coredns-787d4945fb-kzz8v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif589e4c247a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:26:51.591617 env[1221]: 2024-02-09 19:26:51.543 [INFO][4575] k8s.go 578: Cleaning up netns ContainerID="604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" Feb 9 19:26:51.591617 env[1221]: 2024-02-09 19:26:51.543 [INFO][4575] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" iface="eth0" netns="" Feb 9 19:26:51.591617 env[1221]: 2024-02-09 19:26:51.543 [INFO][4575] k8s.go 585: Releasing IP address(es) ContainerID="604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" Feb 9 19:26:51.591617 env[1221]: 2024-02-09 19:26:51.543 [INFO][4575] utils.go 188: Calico CNI releasing IP address ContainerID="604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" Feb 9 19:26:51.591617 env[1221]: 2024-02-09 19:26:51.578 [INFO][4581] ipam_plugin.go 415: Releasing address using handleID ContainerID="604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" HandleID="k8s-pod-network.604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--kzz8v-eth0" Feb 9 19:26:51.591617 env[1221]: 2024-02-09 19:26:51.578 [INFO][4581] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:26:51.591617 env[1221]: 2024-02-09 19:26:51.578 [INFO][4581] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 19:26:51.591617 env[1221]: 2024-02-09 19:26:51.586 [WARNING][4581] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" HandleID="k8s-pod-network.604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--kzz8v-eth0" Feb 9 19:26:51.591617 env[1221]: 2024-02-09 19:26:51.586 [INFO][4581] ipam_plugin.go 443: Releasing address using workloadID ContainerID="604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" HandleID="k8s-pod-network.604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--kzz8v-eth0" Feb 9 19:26:51.591617 env[1221]: 2024-02-09 19:26:51.588 [INFO][4581] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:26:51.591617 env[1221]: 2024-02-09 19:26:51.590 [INFO][4575] k8s.go 591: Teardown processing complete. ContainerID="604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6" Feb 9 19:26:51.592507 env[1221]: time="2024-02-09T19:26:51.591668691Z" level=info msg="TearDown network for sandbox \"604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6\" successfully" Feb 9 19:26:51.599221 env[1221]: time="2024-02-09T19:26:51.599173663Z" level=info msg="RemovePodSandbox \"604d6ef780e4c0fc76527ef78ed93b9bba0db934940443a7a5dbee0758accee6\" returns successfully" Feb 9 19:26:51.599922 env[1221]: time="2024-02-09T19:26:51.599883403Z" level=info msg="StopPodSandbox for \"ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f\"" Feb 9 19:26:51.694456 env[1221]: 2024-02-09 19:26:51.650 [WARNING][4599] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--kube--controllers--76f544f57c--zfhh4-eth0", GenerateName:"calico-kube-controllers-76f544f57c-", Namespace:"calico-system", SelfLink:"", UID:"5b12e0e6-a8af-4fcb-82b3-77090ee9a472", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 26, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76f544f57c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", ContainerID:"69555bffaf2ab822f626209636506a67c77d73afe9016ac8b21e132bc987184c", Pod:"calico-kube-controllers-76f544f57c-zfhh4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.113.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali34377f70188", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:26:51.694456 env[1221]: 2024-02-09 19:26:51.651 [INFO][4599] k8s.go 578: Cleaning up netns ContainerID="ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" Feb 9 19:26:51.694456 env[1221]: 2024-02-09 19:26:51.651 [INFO][4599] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" iface="eth0" netns="" Feb 9 19:26:51.694456 env[1221]: 2024-02-09 19:26:51.651 [INFO][4599] k8s.go 585: Releasing IP address(es) ContainerID="ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" Feb 9 19:26:51.694456 env[1221]: 2024-02-09 19:26:51.651 [INFO][4599] utils.go 188: Calico CNI releasing IP address ContainerID="ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" Feb 9 19:26:51.694456 env[1221]: 2024-02-09 19:26:51.677 [INFO][4607] ipam_plugin.go 415: Releasing address using handleID ContainerID="ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" HandleID="k8s-pod-network.ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--kube--controllers--76f544f57c--zfhh4-eth0" Feb 9 19:26:51.694456 env[1221]: 2024-02-09 19:26:51.677 [INFO][4607] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:26:51.694456 env[1221]: 2024-02-09 19:26:51.678 [INFO][4607] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:26:51.694456 env[1221]: 2024-02-09 19:26:51.687 [WARNING][4607] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" HandleID="k8s-pod-network.ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--kube--controllers--76f544f57c--zfhh4-eth0" Feb 9 19:26:51.694456 env[1221]: 2024-02-09 19:26:51.687 [INFO][4607] ipam_plugin.go 443: Releasing address using workloadID ContainerID="ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" HandleID="k8s-pod-network.ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--kube--controllers--76f544f57c--zfhh4-eth0" Feb 9 19:26:51.694456 env[1221]: 2024-02-09 19:26:51.689 [INFO][4607] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:26:51.694456 env[1221]: 2024-02-09 19:26:51.690 [INFO][4599] k8s.go 591: Teardown processing complete. ContainerID="ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" Feb 9 19:26:51.696134 env[1221]: time="2024-02-09T19:26:51.696067446Z" level=info msg="TearDown network for sandbox \"ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f\" successfully" Feb 9 19:26:51.696357 env[1221]: time="2024-02-09T19:26:51.696270457Z" level=info msg="StopPodSandbox for \"ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f\" returns successfully" Feb 9 19:26:51.697056 env[1221]: time="2024-02-09T19:26:51.697021811Z" level=info msg="RemovePodSandbox for \"ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f\"" Feb 9 19:26:51.697302 env[1221]: time="2024-02-09T19:26:51.697218204Z" level=info msg="Forcibly stopping sandbox \"ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f\"" Feb 9 19:26:51.792480 env[1221]: 2024-02-09 19:26:51.749 [WARNING][4626] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--kube--controllers--76f544f57c--zfhh4-eth0", GenerateName:"calico-kube-controllers-76f544f57c-", Namespace:"calico-system", SelfLink:"", UID:"5b12e0e6-a8af-4fcb-82b3-77090ee9a472", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 26, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76f544f57c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", ContainerID:"69555bffaf2ab822f626209636506a67c77d73afe9016ac8b21e132bc987184c", Pod:"calico-kube-controllers-76f544f57c-zfhh4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.113.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali34377f70188", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:26:51.792480 env[1221]: 2024-02-09 19:26:51.749 [INFO][4626] k8s.go 578: Cleaning up netns ContainerID="ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" Feb 9 19:26:51.792480 env[1221]: 2024-02-09 19:26:51.749 [INFO][4626] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" iface="eth0" netns="" Feb 9 19:26:51.792480 env[1221]: 2024-02-09 19:26:51.749 [INFO][4626] k8s.go 585: Releasing IP address(es) ContainerID="ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" Feb 9 19:26:51.792480 env[1221]: 2024-02-09 19:26:51.749 [INFO][4626] utils.go 188: Calico CNI releasing IP address ContainerID="ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" Feb 9 19:26:51.792480 env[1221]: 2024-02-09 19:26:51.780 [INFO][4633] ipam_plugin.go 415: Releasing address using handleID ContainerID="ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" HandleID="k8s-pod-network.ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--kube--controllers--76f544f57c--zfhh4-eth0" Feb 9 19:26:51.792480 env[1221]: 2024-02-09 19:26:51.780 [INFO][4633] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:26:51.792480 env[1221]: 2024-02-09 19:26:51.780 [INFO][4633] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:26:51.792480 env[1221]: 2024-02-09 19:26:51.788 [WARNING][4633] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" HandleID="k8s-pod-network.ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--kube--controllers--76f544f57c--zfhh4-eth0" Feb 9 19:26:51.792480 env[1221]: 2024-02-09 19:26:51.788 [INFO][4633] ipam_plugin.go 443: Releasing address using workloadID ContainerID="ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" HandleID="k8s-pod-network.ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-calico--kube--controllers--76f544f57c--zfhh4-eth0" Feb 9 19:26:51.792480 env[1221]: 2024-02-09 19:26:51.789 [INFO][4633] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:26:51.792480 env[1221]: 2024-02-09 19:26:51.791 [INFO][4626] k8s.go 591: Teardown processing complete. ContainerID="ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f" Feb 9 19:26:51.793342 env[1221]: time="2024-02-09T19:26:51.792496278Z" level=info msg="TearDown network for sandbox \"ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f\" successfully" Feb 9 19:26:51.801140 env[1221]: time="2024-02-09T19:26:51.801015409Z" level=info msg="RemovePodSandbox \"ac0713352c8fb08eda81ea5a91525ae36fa0aa27718e13d9c7c90dad702c4b8f\" returns successfully" Feb 9 19:26:51.801643 env[1221]: time="2024-02-09T19:26:51.801596004Z" level=info msg="StopPodSandbox for \"82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b\"" Feb 9 19:26:51.889081 env[1221]: 2024-02-09 19:26:51.847 [WARNING][4653] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-csi--node--driver--4qwfs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4296df02-d23c-4462-abee-22483f63c36c", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 26, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", ContainerID:"e79b4b795f3e12bca8f081091d37f2d513096278c4a9589f03e50567e8d14520", Pod:"csi-node-driver-4qwfs", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.113.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali3b1764cfcb4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:26:51.889081 env[1221]: 2024-02-09 19:26:51.847 [INFO][4653] k8s.go 578: Cleaning up 
netns ContainerID="82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" Feb 9 19:26:51.889081 env[1221]: 2024-02-09 19:26:51.847 [INFO][4653] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" iface="eth0" netns="" Feb 9 19:26:51.889081 env[1221]: 2024-02-09 19:26:51.847 [INFO][4653] k8s.go 585: Releasing IP address(es) ContainerID="82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" Feb 9 19:26:51.889081 env[1221]: 2024-02-09 19:26:51.847 [INFO][4653] utils.go 188: Calico CNI releasing IP address ContainerID="82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" Feb 9 19:26:51.889081 env[1221]: 2024-02-09 19:26:51.872 [INFO][4659] ipam_plugin.go 415: Releasing address using handleID ContainerID="82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" HandleID="k8s-pod-network.82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-csi--node--driver--4qwfs-eth0" Feb 9 19:26:51.889081 env[1221]: 2024-02-09 19:26:51.872 [INFO][4659] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:26:51.889081 env[1221]: 2024-02-09 19:26:51.872 [INFO][4659] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:26:51.889081 env[1221]: 2024-02-09 19:26:51.885 [WARNING][4659] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" HandleID="k8s-pod-network.82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-csi--node--driver--4qwfs-eth0" Feb 9 19:26:51.889081 env[1221]: 2024-02-09 19:26:51.885 [INFO][4659] ipam_plugin.go 443: Releasing address using workloadID ContainerID="82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" HandleID="k8s-pod-network.82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-csi--node--driver--4qwfs-eth0" Feb 9 19:26:51.889081 env[1221]: 2024-02-09 19:26:51.886 [INFO][4659] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:26:51.889081 env[1221]: 2024-02-09 19:26:51.887 [INFO][4653] k8s.go 591: Teardown processing complete. ContainerID="82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" Feb 9 19:26:51.889965 env[1221]: time="2024-02-09T19:26:51.889141373Z" level=info msg="TearDown network for sandbox \"82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b\" successfully" Feb 9 19:26:51.889965 env[1221]: time="2024-02-09T19:26:51.889182240Z" level=info msg="StopPodSandbox for \"82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b\" returns successfully" Feb 9 19:26:51.889965 env[1221]: time="2024-02-09T19:26:51.889814349Z" level=info msg="RemovePodSandbox for \"82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b\"" Feb 9 19:26:51.889965 env[1221]: time="2024-02-09T19:26:51.889859415Z" level=info msg="Forcibly stopping sandbox \"82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b\"" Feb 9 19:26:51.998680 env[1221]: 2024-02-09 19:26:51.932 [WARNING][4677] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-csi--node--driver--4qwfs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4296df02-d23c-4462-abee-22483f63c36c", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 26, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", ContainerID:"e79b4b795f3e12bca8f081091d37f2d513096278c4a9589f03e50567e8d14520", Pod:"csi-node-driver-4qwfs", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.113.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali3b1764cfcb4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:26:51.998680 env[1221]: 2024-02-09 19:26:51.932 [INFO][4677] k8s.go 578: Cleaning up netns ContainerID="82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" Feb 9 19:26:51.998680 env[1221]: 2024-02-09 19:26:51.932 [INFO][4677] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" iface="eth0" netns="" Feb 9 19:26:51.998680 env[1221]: 2024-02-09 19:26:51.933 [INFO][4677] k8s.go 585: Releasing IP address(es) ContainerID="82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" Feb 9 19:26:51.998680 env[1221]: 2024-02-09 19:26:51.933 [INFO][4677] utils.go 188: Calico CNI releasing IP address ContainerID="82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" Feb 9 19:26:51.998680 env[1221]: 2024-02-09 19:26:51.969 [INFO][4683] ipam_plugin.go 415: Releasing address using handleID ContainerID="82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" HandleID="k8s-pod-network.82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-csi--node--driver--4qwfs-eth0" Feb 9 19:26:51.998680 env[1221]: 2024-02-09 19:26:51.969 [INFO][4683] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:26:51.998680 env[1221]: 2024-02-09 19:26:51.970 [INFO][4683] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:26:51.998680 env[1221]: 2024-02-09 19:26:51.991 [WARNING][4683] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" HandleID="k8s-pod-network.82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-csi--node--driver--4qwfs-eth0" Feb 9 19:26:51.998680 env[1221]: 2024-02-09 19:26:51.992 [INFO][4683] ipam_plugin.go 443: Releasing address using workloadID ContainerID="82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" HandleID="k8s-pod-network.82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-csi--node--driver--4qwfs-eth0" Feb 9 19:26:51.998680 env[1221]: 2024-02-09 19:26:51.993 [INFO][4683] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:26:51.998680 env[1221]: 2024-02-09 19:26:51.995 [INFO][4677] k8s.go 591: Teardown processing complete. ContainerID="82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b" Feb 9 19:26:52.013923 env[1221]: time="2024-02-09T19:26:51.998630267Z" level=info msg="TearDown network for sandbox \"82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b\" successfully" Feb 9 19:26:52.023126 env[1221]: time="2024-02-09T19:26:52.023071660Z" level=info msg="RemovePodSandbox \"82c3fdd9fb92f62ab666f08d4416222212f8019e223a9f4bfec2537b1223551b\" returns successfully" Feb 9 19:26:52.023683 env[1221]: time="2024-02-09T19:26:52.023639866Z" level=info msg="StopPodSandbox for \"df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178\"" Feb 9 19:26:52.126910 env[1221]: 2024-02-09 19:26:52.079 [WARNING][4701] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--fcq6k-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"f2e3bd37-daf7-4099-845b-c1f627963705", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 26, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", ContainerID:"c1a1e53f690a4fe97bca6d3309bb32ea91d5bd199f4d433b17ee307be768770c", Pod:"coredns-787d4945fb-fcq6k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaff8286279a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:26:52.126910 env[1221]: 2024-02-09 19:26:52.079 [INFO][4701] k8s.go 578: Cleaning up netns ContainerID="df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" Feb 9 19:26:52.126910 env[1221]: 2024-02-09 19:26:52.080 [INFO][4701] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" iface="eth0" netns="" Feb 9 19:26:52.126910 env[1221]: 2024-02-09 19:26:52.080 [INFO][4701] k8s.go 585: Releasing IP address(es) ContainerID="df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" Feb 9 19:26:52.126910 env[1221]: 2024-02-09 19:26:52.080 [INFO][4701] utils.go 188: Calico CNI releasing IP address ContainerID="df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" Feb 9 19:26:52.126910 env[1221]: 2024-02-09 19:26:52.113 [INFO][4707] ipam_plugin.go 415: Releasing address using handleID ContainerID="df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" HandleID="k8s-pod-network.df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--fcq6k-eth0" Feb 9 19:26:52.126910 env[1221]: 2024-02-09 19:26:52.113 [INFO][4707] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:26:52.126910 env[1221]: 2024-02-09 19:26:52.113 [INFO][4707] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:26:52.126910 env[1221]: 2024-02-09 19:26:52.122 [WARNING][4707] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" HandleID="k8s-pod-network.df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--fcq6k-eth0" Feb 9 19:26:52.126910 env[1221]: 2024-02-09 19:26:52.122 [INFO][4707] ipam_plugin.go 443: Releasing address using workloadID ContainerID="df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" HandleID="k8s-pod-network.df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--fcq6k-eth0" Feb 9 19:26:52.126910 env[1221]: 2024-02-09 19:26:52.124 [INFO][4707] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:26:52.126910 env[1221]: 2024-02-09 19:26:52.125 [INFO][4701] k8s.go 591: Teardown processing complete. 
ContainerID="df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" Feb 9 19:26:52.127663 env[1221]: time="2024-02-09T19:26:52.127624098Z" level=info msg="TearDown network for sandbox \"df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178\" successfully" Feb 9 19:26:52.127773 env[1221]: time="2024-02-09T19:26:52.127751573Z" level=info msg="StopPodSandbox for \"df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178\" returns successfully" Feb 9 19:26:52.128543 env[1221]: time="2024-02-09T19:26:52.128505940Z" level=info msg="RemovePodSandbox for \"df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178\"" Feb 9 19:26:52.128661 env[1221]: time="2024-02-09T19:26:52.128553403Z" level=info msg="Forcibly stopping sandbox \"df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178\"" Feb 9 19:26:52.153481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3037630209.mount: Deactivated successfully. Feb 9 19:26:52.200273 systemd-networkd[1083]: cali5a082fb0fad: Gained IPv6LL Feb 9 19:26:52.258951 env[1221]: 2024-02-09 19:26:52.197 [WARNING][4727] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--fcq6k-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"f2e3bd37-daf7-4099-845b-c1f627963705", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 26, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-2-c23b420fd1c4e436d83b.c.flatcar-212911.internal", ContainerID:"c1a1e53f690a4fe97bca6d3309bb32ea91d5bd199f4d433b17ee307be768770c", Pod:"coredns-787d4945fb-fcq6k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaff8286279a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:26:52.258951 env[1221]: 2024-02-09 19:26:52.197 [INFO][4727] k8s.go 578: Cleaning up netns ContainerID="df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" Feb 9 19:26:52.258951 env[1221]: 2024-02-09 19:26:52.197 [INFO][4727] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" iface="eth0" netns="" Feb 9 19:26:52.258951 env[1221]: 2024-02-09 19:26:52.197 [INFO][4727] k8s.go 585: Releasing IP address(es) ContainerID="df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" Feb 9 19:26:52.258951 env[1221]: 2024-02-09 19:26:52.197 [INFO][4727] utils.go 188: Calico CNI releasing IP address ContainerID="df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" Feb 9 19:26:52.258951 env[1221]: 2024-02-09 19:26:52.244 [INFO][4733] ipam_plugin.go 415: Releasing address using handleID ContainerID="df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" HandleID="k8s-pod-network.df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--fcq6k-eth0" Feb 9 19:26:52.258951 env[1221]: 2024-02-09 19:26:52.244 [INFO][4733] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:26:52.258951 env[1221]: 2024-02-09 19:26:52.245 [INFO][4733] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:26:52.258951 env[1221]: 2024-02-09 19:26:52.253 [WARNING][4733] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" HandleID="k8s-pod-network.df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--fcq6k-eth0" Feb 9 19:26:52.258951 env[1221]: 2024-02-09 19:26:52.253 [INFO][4733] ipam_plugin.go 443: Releasing address using workloadID ContainerID="df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" HandleID="k8s-pod-network.df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" Workload="ci--3510--3--2--c23b420fd1c4e436d83b.c.flatcar--212911.internal-k8s-coredns--787d4945fb--fcq6k-eth0" Feb 9 19:26:52.258951 env[1221]: 2024-02-09 19:26:52.255 [INFO][4733] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:26:52.258951 env[1221]: 2024-02-09 19:26:52.257 [INFO][4727] k8s.go 591: Teardown processing complete. ContainerID="df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178" Feb 9 19:26:52.258951 env[1221]: time="2024-02-09T19:26:52.258852742Z" level=info msg="TearDown network for sandbox \"df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178\" successfully" Feb 9 19:26:52.265748 env[1221]: time="2024-02-09T19:26:52.265690667Z" level=info msg="RemovePodSandbox \"df22bf1710aaa82240a901dc3961927193813a5e7fac1b94c43e1f07fb813178\" returns successfully" Feb 9 19:26:52.775550 systemd-networkd[1083]: cali8118756c8a0: Gained IPv6LL Feb 9 19:26:53.751049 systemd[1]: run-containerd-runc-k8s.io-510b05576ebee6d96d8bdf2244d4f3074861e04e769fd094f51354ca5fa14d1a-runc.8drRVk.mount: Deactivated successfully. 
Feb 9 19:26:54.312327 env[1221]: time="2024-02-09T19:26:54.312247956Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:54.315741 env[1221]: time="2024-02-09T19:26:54.315688469Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:54.319162 env[1221]: time="2024-02-09T19:26:54.319114872Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:54.322269 env[1221]: time="2024-02-09T19:26:54.322216681Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:5ff0bdc8d0b2e9d7819703b18867f60f9153ed01da81e2bbfa22002abec9dc26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:54.323850 env[1221]: time="2024-02-09T19:26:54.323792812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\" returns image reference \"sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a\"" Feb 9 19:26:54.327241 env[1221]: time="2024-02-09T19:26:54.325857289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\"" Feb 9 19:26:54.329253 env[1221]: time="2024-02-09T19:26:54.329210883Z" level=info msg="CreateContainer within sandbox \"12ceb74ce68855f630b4a01254df0070b50308e4b5029c84176363ee6075fa69\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 9 19:26:54.355660 env[1221]: time="2024-02-09T19:26:54.355603339Z" level=info msg="CreateContainer within sandbox \"12ceb74ce68855f630b4a01254df0070b50308e4b5029c84176363ee6075fa69\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"254da3db45a268d5dd460815b924fe873b62b37f9da777b98b5127ae531416ea\"" Feb 9 19:26:54.359478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2111400754.mount: Deactivated successfully. 
Feb 9 19:26:54.361190 env[1221]: time="2024-02-09T19:26:54.361139333Z" level=info msg="StartContainer for \"254da3db45a268d5dd460815b924fe873b62b37f9da777b98b5127ae531416ea\"" Feb 9 19:26:54.477330 env[1221]: time="2024-02-09T19:26:54.475218814Z" level=info msg="StartContainer for \"254da3db45a268d5dd460815b924fe873b62b37f9da777b98b5127ae531416ea\" returns successfully" Feb 9 19:26:54.680619 env[1221]: time="2024-02-09T19:26:54.680563846Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:54.685325 env[1221]: time="2024-02-09T19:26:54.683331203Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:54.693586 env[1221]: time="2024-02-09T19:26:54.693540228Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:54.696404 env[1221]: time="2024-02-09T19:26:54.696362667Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:5ff0bdc8d0b2e9d7819703b18867f60f9153ed01da81e2bbfa22002abec9dc26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:26:54.697974 env[1221]: time="2024-02-09T19:26:54.697925353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\" returns image reference \"sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a\"" Feb 9 19:26:54.702915 env[1221]: time="2024-02-09T19:26:54.702873207Z" level=info msg="CreateContainer within sandbox \"d4f7ea06b873a2520cb4dfce02a275c8b221641565f3a0b1ae4cd513df1493f3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 9 19:26:54.721853 env[1221]: time="2024-02-09T19:26:54.721800479Z" level=info msg="CreateContainer within sandbox \"d4f7ea06b873a2520cb4dfce02a275c8b221641565f3a0b1ae4cd513df1493f3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ca306ec4862351cd0c0ba4a764a7b15afd94ce553b02da727af3dbf3932ff2da\"" Feb 9 19:26:54.722812 env[1221]: time="2024-02-09T19:26:54.722780494Z" level=info msg="StartContainer for \"ca306ec4862351cd0c0ba4a764a7b15afd94ce553b02da727af3dbf3932ff2da\"" Feb 9 19:26:54.787924 systemd[1]: run-containerd-runc-k8s.io-ca306ec4862351cd0c0ba4a764a7b15afd94ce553b02da727af3dbf3932ff2da-runc.9w5OxB.mount: Deactivated successfully. 
Feb 9 19:26:54.923822 env[1221]: time="2024-02-09T19:26:54.923762632Z" level=info msg="StartContainer for \"ca306ec4862351cd0c0ba4a764a7b15afd94ce553b02da727af3dbf3932ff2da\" returns successfully" Feb 9 19:26:55.095711 kubelet[2267]: I0209 19:26:55.095578 2267 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8555cf7879-t999h" podStartSLOduration=-9.223372030759249e+09 pod.CreationTimestamp="2024-02-09 19:26:49 +0000 UTC" firstStartedPulling="2024-02-09 19:26:51.120903577 +0000 UTC m=+60.078581591" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:26:55.09431152 +0000 UTC m=+64.051989552" watchObservedRunningTime="2024-02-09 19:26:55.095526587 +0000 UTC m=+64.053204627" Feb 9 19:26:55.137362 kubelet[2267]: I0209 19:26:55.137322 2267 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8555cf7879-zz4dq" podStartSLOduration=-9.223372030717524e+09 pod.CreationTimestamp="2024-02-09 19:26:49 +0000 UTC" firstStartedPulling="2024-02-09 19:26:51.089254939 +0000 UTC m=+60.046932969" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:26:55.135802681 +0000 UTC m=+64.093480719" watchObservedRunningTime="2024-02-09 19:26:55.137251658 +0000 UTC m=+64.094929695" Feb 9 19:26:55.368675 kernel: kauditd_printk_skb: 8 callbacks suppressed Feb 9 19:26:55.368866 kernel: audit: type=1325 audit(1707506815.345:324): table=filter:129 family=2 entries=8 op=nft_register_rule pid=4862 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:55.345000 audit[4862]: NETFILTER_CFG table=filter:129 family=2 entries=8 op=nft_register_rule pid=4862 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:55.345000 audit[4862]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffe256d1120 a2=0 a3=7ffe256d110c items=0 ppid=2426 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:55.418332 kernel: audit: type=1300 audit(1707506815.345:324): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffe256d1120 a2=0 a3=7ffe256d110c items=0 ppid=2426 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:55.345000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:55.382000 audit[4862]: NETFILTER_CFG table=nat:130 family=2 entries=78 op=nft_register_rule pid=4862 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:55.466751 kernel: audit: type=1327 audit(1707506815.345:324): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:55.466879 kernel: audit: type=1325 audit(1707506815.382:325): table=nat:130 family=2 entries=78 op=nft_register_rule pid=4862 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:55.382000 audit[4862]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffe256d1120 a2=0 a3=7ffe256d110c items=0 ppid=2426 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:55.382000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:55.529413 kernel: audit: type=1300 audit(1707506815.382:325): arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffe256d1120 a2=0 a3=7ffe256d110c items=0 ppid=2426 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:55.529581 kernel: audit: type=1327 audit(1707506815.382:325): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:55.617000 audit[4888]: NETFILTER_CFG table=filter:131 family=2 entries=8 op=nft_register_rule pid=4888 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:55.636418 kernel: audit: type=1325 audit(1707506815.617:326): table=filter:131 family=2 entries=8 op=nft_register_rule pid=4888 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:55.680421 kernel: audit: type=1300 audit(1707506815.617:326): arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffece5312a0 a2=0 a3=7ffece53128c items=0 ppid=2426 pid=4888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:55.617000 audit[4888]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffece5312a0 a2=0 a3=7ffece53128c items=0 ppid=2426 pid=4888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:55.617000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:55.717310 kernel: audit: type=1327 audit(1707506815.617:326): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:55.717448 kernel: audit: type=1325 audit(1707506815.617:327): table=nat:132 family=2 entries=78 op=nft_register_rule pid=4888 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:55.617000 audit[4888]: NETFILTER_CFG table=nat:132 family=2 entries=78 op=nft_register_rule pid=4888 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:26:55.617000 audit[4888]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffece5312a0 a2=0 a3=7ffece53128c items=0 ppid=2426 pid=4888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:55.617000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:26:59.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.128.0.66:22-147.75.109.163:33908 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:26:59.682744 systemd[1]: Started sshd@9-10.128.0.66:22-147.75.109.163:33908.service. 
Feb 9 19:26:59.969000 audit[4896]: USER_ACCT pid=4896 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:26:59.970000 audit[4896]: CRED_ACQ pid=4896 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:26:59.970000 audit[4896]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc91e99be0 a2=3 a3=0 items=0 ppid=1 pid=4896 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:26:59.970000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:26:59.972662 sshd[4896]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:26:59.974154 sshd[4896]: Accepted publickey for core from 147.75.109.163 port 33908 ssh2: RSA SHA256:2enIA9a+Ie+oz8jW4x9GsRBGLqIoWe8fFi/jhwNVhOs Feb 9 19:26:59.980434 systemd[1]: Started session-10.scope. Feb 9 19:26:59.980761 systemd-logind[1201]: New session 10 of user core. Feb 9 19:26:59.989000 audit[4896]: USER_START pid=4896 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:26:59.991000 audit[4899]: CRED_ACQ pid=4899 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:00.620933 sshd[4896]: pam_unix(sshd:session): session closed for user core Feb 9 19:27:00.631772 kernel: kauditd_printk_skb: 10 callbacks suppressed Feb 9 19:27:00.631900 kernel: audit: type=1106 audit(1707506820.621:334): pid=4896 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:00.621000 audit[4896]: USER_END pid=4896 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:00.629541 systemd[1]: sshd@9-10.128.0.66:22-147.75.109.163:33908.service: Deactivated successfully. Feb 9 19:27:00.630933 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 19:27:00.633576 systemd-logind[1201]: Session 10 logged out. Waiting for processes to exit. Feb 9 19:27:00.635104 systemd-logind[1201]: Removed session 10. 
Feb 9 19:27:00.621000 audit[4896]: CRED_DISP pid=4896 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:00.685745 kernel: audit: type=1104 audit(1707506820.621:335): pid=4896 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:00.685917 kernel: audit: type=1131 audit(1707506820.627:336): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.128.0.66:22-147.75.109.163:33908 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:00.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.128.0.66:22-147.75.109.163:33908 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:05.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.128.0.66:22-147.75.109.163:52222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:05.665730 systemd[1]: Started sshd@10-10.128.0.66:22-147.75.109.163:52222.service. Feb 9 19:27:05.691371 kernel: audit: type=1130 audit(1707506825.664:337): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.128.0.66:22-147.75.109.163:52222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:05.988480 kernel: audit: type=1101 audit(1707506825.956:338): pid=4914 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:05.956000 audit[4914]: USER_ACCT pid=4914 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:05.989235 sshd[4914]: Accepted publickey for core from 147.75.109.163 port 52222 ssh2: RSA SHA256:2enIA9a+Ie+oz8jW4x9GsRBGLqIoWe8fFi/jhwNVhOs Feb 9 19:27:05.989518 sshd[4914]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:27:05.987000 audit[4914]: CRED_ACQ pid=4914 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:06.000813 systemd[1]: Started session-11.scope. Feb 9 19:27:06.001674 systemd-logind[1201]: New session 11 of user core. 
Feb 9 19:27:06.016316 kernel: audit: type=1103 audit(1707506825.987:339): pid=4914 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:05.987000 audit[4914]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe42cebcb0 a2=3 a3=0 items=0 ppid=1 pid=4914 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:06.063817 kernel: audit: type=1006 audit(1707506825.987:340): pid=4914 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Feb 9 19:27:06.063991 kernel: audit: type=1300 audit(1707506825.987:340): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe42cebcb0 a2=3 a3=0 items=0 ppid=1 pid=4914 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:06.064051 kernel: audit: type=1327 audit(1707506825.987:340): proctitle=737368643A20636F7265205B707269765D Feb 9 19:27:05.987000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:27:06.009000 audit[4914]: USER_START pid=4914 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:06.105925 kernel: audit: type=1105 audit(1707506826.009:341): pid=4914 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:06.106481 kernel: audit: type=1103 audit(1707506826.017:342): pid=4917 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:06.017000 audit[4917]: CRED_ACQ pid=4917 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:06.315978 sshd[4914]: pam_unix(sshd:session): session closed for user core Feb 9 19:27:06.317000 audit[4914]: USER_END pid=4914 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:06.328170 systemd[1]: sshd@10-10.128.0.66:22-147.75.109.163:52222.service: Deactivated successfully. Feb 9 19:27:06.330244 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 19:27:06.331715 systemd-logind[1201]: Session 11 logged out. Waiting for processes to exit. Feb 9 19:27:06.333285 systemd-logind[1201]: Removed session 11. 
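Every SSH session in this section emits the same audit sequence: SERVICE_START for the per-connection sshd@... unit, USER_ACCT and CRED_ACQ while PAM checks the "core" account, a LOGIN/SYSCALL/PROCTITLE group when the login uid and session id are assigned (old-auid=4294967295 -> auid=500, ses=10, 11, ...), then USER_START plus a second CRED_ACQ when the session opens, and USER_END, CRED_DISP and SERVICE_STOP when it closes. A rough sketch for pulling the key=value fields out of such records so they can be grouped by ses=; the field layout is taken from the records above, but the parser itself is an illustrative assumption, not an official audit tool:

# audit_fields.py - split an audit record body into a {key: value} dict.
import re

# key=value pairs: bare tokens, 'single-quoted msg payloads' or "quoted" paths.
FIELD_RE = re.compile(r"""(\w[\w-]*)=('[^']*'|"[^"]*"|\S+)""")

def parse_fields(record: str) -> dict[str, str]:
    """Return the key=value fields of one audit record, quotes stripped."""
    fields = {k: v.strip("'\"") for k, v in FIELD_RE.findall(record)}
    # PAM-style records nest more key=value pairs inside msg='...'; fold them in.
    if "=" in fields.get("msg", ""):
        fields.update({k: v.strip("'\"") for k, v in FIELD_RE.findall(fields["msg"])})
    return fields

if __name__ == "__main__":
    # Abridged copy of the USER_END record for session 10 above.
    record = ("audit[4896]: USER_END pid=4896 uid=0 auid=500 ses=10 "
              "subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close "
              'acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 '
              "addr=147.75.109.163 terminal=ssh res=success'")
    fields = parse_fields(record)
    print(fields["ses"], fields["acct"], fields["exe"], fields["res"])  # 10 core /usr/sbin/sshd success

Grouping on fields["ses"] then collects the USER_* and CRED_* records that belong to one numbered session; the SERVICE_START/SERVICE_STOP records keep ses=4294967295 and have to be matched by unit name instead.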
Feb 9 19:27:06.317000 audit[4914]: CRED_DISP pid=4914 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:06.375698 kernel: audit: type=1106 audit(1707506826.317:343): pid=4914 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:06.375860 kernel: audit: type=1104 audit(1707506826.317:344): pid=4914 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:06.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.128.0.66:22-147.75.109.163:52222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:11.377374 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:27:11.377563 kernel: audit: type=1130 audit(1707506831.360:346): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.128.0.66:22-147.75.109.163:52224 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:11.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.128.0.66:22-147.75.109.163:52224 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:11.361190 systemd[1]: Started sshd@11-10.128.0.66:22-147.75.109.163:52224.service. Feb 9 19:27:11.724322 kernel: audit: type=1101 audit(1707506831.692:347): pid=4931 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:11.692000 audit[4931]: USER_ACCT pid=4931 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:11.724644 sshd[4931]: Accepted publickey for core from 147.75.109.163 port 52224 ssh2: RSA SHA256:2enIA9a+Ie+oz8jW4x9GsRBGLqIoWe8fFi/jhwNVhOs Feb 9 19:27:11.726281 sshd[4931]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:27:11.724000 audit[4931]: CRED_ACQ pid=4931 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:11.738795 systemd[1]: Started session-12.scope. Feb 9 19:27:11.740822 systemd-logind[1201]: New session 12 of user core. 
Feb 9 19:27:11.753943 kernel: audit: type=1103 audit(1707506831.724:348): pid=4931 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:11.724000 audit[4931]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcbd893c30 a2=3 a3=0 items=0 ppid=1 pid=4931 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:11.805120 kernel: audit: type=1006 audit(1707506831.724:349): pid=4931 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Feb 9 19:27:11.815448 kernel: audit: type=1300 audit(1707506831.724:349): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcbd893c30 a2=3 a3=0 items=0 ppid=1 pid=4931 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:11.815521 kernel: audit: type=1327 audit(1707506831.724:349): proctitle=737368643A20636F7265205B707269765D Feb 9 19:27:11.815561 kernel: audit: type=1105 audit(1707506831.749:350): pid=4931 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:11.724000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:27:11.749000 audit[4931]: USER_START pid=4931 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:11.757000 audit[4934]: CRED_ACQ pid=4934 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:11.872912 kernel: audit: type=1103 audit(1707506831.757:351): pid=4934 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:12.259104 sshd[4931]: pam_unix(sshd:session): session closed for user core Feb 9 19:27:12.259000 audit[4931]: USER_END pid=4931 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:12.294546 kernel: audit: type=1106 audit(1707506832.259:352): pid=4931 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:12.302662 systemd-logind[1201]: Session 12 logged out. 
Waiting for processes to exit. Feb 9 19:27:12.303210 systemd[1]: sshd@11-10.128.0.66:22-147.75.109.163:52224.service: Deactivated successfully. Feb 9 19:27:12.304589 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 19:27:12.306611 systemd-logind[1201]: Removed session 12. Feb 9 19:27:12.275000 audit[4931]: CRED_DISP pid=4931 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:12.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.128.0.66:22-147.75.109.163:52224 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:12.347332 kernel: audit: type=1104 audit(1707506832.275:353): pid=4931 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:15.611779 systemd[1]: run-containerd-runc-k8s.io-70f0f32a7febb78233af606056990407f65e20b485ef0338d6bc7630fdd27813-runc.sUCUp1.mount: Deactivated successfully. Feb 9 19:27:17.303384 systemd[1]: Started sshd@12-10.128.0.66:22-147.75.109.163:59896.service. Feb 9 19:27:17.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.128.0.66:22-147.75.109.163:59896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:17.309586 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:27:17.309671 kernel: audit: type=1130 audit(1707506837.302:355): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.128.0.66:22-147.75.109.163:59896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:17.627404 kernel: audit: type=1101 audit(1707506837.595:356): pid=4974 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:17.595000 audit[4974]: USER_ACCT pid=4974 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:17.627756 sshd[4974]: Accepted publickey for core from 147.75.109.163 port 59896 ssh2: RSA SHA256:2enIA9a+Ie+oz8jW4x9GsRBGLqIoWe8fFi/jhwNVhOs Feb 9 19:27:17.628434 sshd[4974]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:27:17.626000 audit[4974]: CRED_ACQ pid=4974 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:17.640062 systemd[1]: Started session-13.scope. Feb 9 19:27:17.642263 systemd-logind[1201]: New session 13 of user core. 
Feb 9 19:27:17.680050 kernel: audit: type=1103 audit(1707506837.626:357): pid=4974 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:17.680176 kernel: audit: type=1006 audit(1707506837.626:358): pid=4974 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Feb 9 19:27:17.680237 kernel: audit: type=1300 audit(1707506837.626:358): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffed456a0f0 a2=3 a3=0 items=0 ppid=1 pid=4974 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:17.626000 audit[4974]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffed456a0f0 a2=3 a3=0 items=0 ppid=1 pid=4974 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:17.626000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:27:17.708434 kernel: audit: type=1327 audit(1707506837.626:358): proctitle=737368643A20636F7265205B707269765D Feb 9 19:27:17.646000 audit[4974]: USER_START pid=4974 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:17.750363 kernel: audit: type=1105 audit(1707506837.646:359): pid=4974 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:17.750527 kernel: audit: type=1103 audit(1707506837.656:360): pid=4976 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:17.656000 audit[4976]: CRED_ACQ pid=4976 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:17.956807 sshd[4974]: pam_unix(sshd:session): session closed for user core Feb 9 19:27:17.957000 audit[4974]: USER_END pid=4974 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:17.962001 systemd-logind[1201]: Session 13 logged out. Waiting for processes to exit. Feb 9 19:27:17.964196 systemd[1]: sshd@12-10.128.0.66:22-147.75.109.163:59896.service: Deactivated successfully. Feb 9 19:27:17.965515 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 19:27:17.967707 systemd-logind[1201]: Removed session 13. 
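The kernel "audit: type=NNNN" lines and the bare userspace records describe the same events twice; matching the serial after the colon in audit(epoch:serial) pairs each number with its symbolic name (for example type=1101 at serial 338 is the USER_ACCT record for pid 4914 above). The mapping observed in this log, which also agrees with the standard Linux audit record types, as a small lookup sketch:

# audit_types.py - numeric audit record types appearing in this log, paired with
# the symbolic names of the matching userspace records (same serial in the
# audit(epoch:serial) stamp).
AUDIT_TYPES = {
    1006: "LOGIN",          # login uid/session assigned (old-auid/old-ses -> auid/ses)
    1101: "USER_ACCT",      # PAM account check for "core"
    1103: "CRED_ACQ",       # PAM credentials acquired
    1104: "CRED_DISP",      # PAM credentials released
    1105: "USER_START",     # PAM session opened
    1106: "USER_END",       # PAM session closed
    1130: "SERVICE_START",  # systemd unit started (the sshd@... instances)
    1131: "SERVICE_STOP",   # systemd unit stopped
    1300: "SYSCALL",        # syscall record (syscall=1 write, syscall=46 sendmsg above)
    1325: "NETFILTER_CFG",  # nf_tables rule/chain registration
    1327: "PROCTITLE",      # hex-encoded command line of the acting process
}

def type_name(num: int) -> str:
    """Name for a 'kernel: audit: type=NNNN' line, if it occurs in this log."""
    return AUDIT_TYPES.get(num, f"UNKNOWN({num})")

if __name__ == "__main__":
    for num in (1325, 1300, 1327, 1130, 1131):
        print(num, type_name(num))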
Feb 9 19:27:17.992700 kernel: audit: type=1106 audit(1707506837.957:361): pid=4974 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:17.992861 kernel: audit: type=1104 audit(1707506837.957:362): pid=4974 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:17.957000 audit[4974]: CRED_DISP pid=4974 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:17.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.128.0.66:22-147.75.109.163:59896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:20.526704 systemd[1]: run-containerd-runc-k8s.io-254da3db45a268d5dd460815b924fe873b62b37f9da777b98b5127ae531416ea-runc.BzL6KU.mount: Deactivated successfully. Feb 9 19:27:20.712000 audit[5054]: NETFILTER_CFG table=filter:133 family=2 entries=7 op=nft_register_rule pid=5054 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:27:20.712000 audit[5054]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffe6916a000 a2=0 a3=7ffe69169fec items=0 ppid=2426 pid=5054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:20.712000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:27:20.715000 audit[5054]: NETFILTER_CFG table=nat:134 family=2 entries=89 op=nft_register_chain pid=5054 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:27:20.715000 audit[5054]: SYSCALL arch=c000003e syscall=46 success=yes exit=30372 a0=3 a1=7ffe6916a000 a2=0 a3=7ffe69169fec items=0 ppid=2426 pid=5054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:20.715000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:27:20.794000 audit[5080]: NETFILTER_CFG table=filter:135 family=2 entries=6 op=nft_register_rule pid=5080 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:27:20.794000 audit[5080]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffd812d0b40 a2=0 a3=7ffd812d0b2c items=0 ppid=2426 pid=5080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:20.794000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:27:20.798000 audit[5080]: NETFILTER_CFG table=nat:136 family=2 entries=94 op=nft_register_rule pid=5080 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:27:20.798000 audit[5080]: SYSCALL arch=c000003e syscall=46 success=yes exit=30372 a0=3 a1=7ffd812d0b40 a2=0 a3=7ffd812d0b2c items=0 ppid=2426 pid=5080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:20.798000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:27:23.000974 systemd[1]: Started sshd@13-10.128.0.66:22-147.75.109.163:59898.service. Feb 9 19:27:23.032460 kernel: kauditd_printk_skb: 13 callbacks suppressed Feb 9 19:27:23.032514 kernel: audit: type=1130 audit(1707506843.001:368): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.128.0.66:22-147.75.109.163:59898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:23.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.128.0.66:22-147.75.109.163:59898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:23.293000 audit[5081]: USER_ACCT pid=5081 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:23.294184 sshd[5081]: Accepted publickey for core from 147.75.109.163 port 59898 ssh2: RSA SHA256:2enIA9a+Ie+oz8jW4x9GsRBGLqIoWe8fFi/jhwNVhOs Feb 9 19:27:23.324947 kernel: audit: type=1101 audit(1707506843.293:369): pid=5081 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:23.325550 sshd[5081]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:27:23.324000 audit[5081]: CRED_ACQ pid=5081 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:23.338143 systemd[1]: Started session-14.scope. Feb 9 19:27:23.340050 systemd-logind[1201]: New session 14 of user core. 
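The NETFILTER_CFG records at 19:27:20 above, and again at 19:27:34 and 19:27:40 below, all come from children of the same parent (ppid=2426) running the iptables-restore argv decoded earlier; each commit bumps the table generation (filter:133, 135, ... and nat:134, 136, ...) and reports an entries= count, with the filter batches growing from 6-8 entries here to 18 and 30 later while the nat batches level off at 94. A rough way to tabulate that churn from a saved copy of this console log (the regex assumes exactly the field layout shown above; the file name is a placeholder):

# nft_churn.py - summarise NETFILTER_CFG records: table generation vs entry count.
# Note: kernel "audit: type=1325" echoes of the same record will match as well.
import re
import sys

NFCFG_RE = re.compile(
    r"NETFILTER_CFG table=(?P<table>\w+):(?P<gen>\d+) family=\d+ "
    r"entries=(?P<entries>\d+) op=(?P<op>\w+) pid=(?P<pid>\d+)"
)

def summarise(lines):
    """Yield (table, generation, entries, op, pid) for each NETFILTER_CFG record."""
    for line in lines:
        for m in NFCFG_RE.finditer(line):
            yield (m["table"], int(m["gen"]), int(m["entries"]), m["op"], int(m["pid"]))

if __name__ == "__main__":
    # Usage: python3 nft_churn.py console.log   (file name is illustrative)
    with open(sys.argv[1]) as fh:
        for table, gen, entries, op, pid in summarise(fh):
            print(f"{table}:{gen:<4} {op:<18} entries={entries:<4} pid={pid}")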
Feb 9 19:27:23.368165 kernel: audit: type=1103 audit(1707506843.324:370): pid=5081 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:23.368332 kernel: audit: type=1006 audit(1707506843.324:371): pid=5081 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Feb 9 19:27:23.324000 audit[5081]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc85fbfda0 a2=3 a3=0 items=0 ppid=1 pid=5081 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:23.397897 kernel: audit: type=1300 audit(1707506843.324:371): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc85fbfda0 a2=3 a3=0 items=0 ppid=1 pid=5081 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:23.398071 kernel: audit: type=1327 audit(1707506843.324:371): proctitle=737368643A20636F7265205B707269765D Feb 9 19:27:23.324000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:27:23.407809 kernel: audit: type=1105 audit(1707506843.348:372): pid=5081 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:23.348000 audit[5081]: USER_START pid=5081 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:23.352000 audit[5085]: CRED_ACQ pid=5085 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:23.465338 kernel: audit: type=1103 audit(1707506843.352:373): pid=5085 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:23.638822 sshd[5081]: pam_unix(sshd:session): session closed for user core Feb 9 19:27:23.640000 audit[5081]: USER_END pid=5081 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:23.674442 kernel: audit: type=1106 audit(1707506843.640:374): pid=5081 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:23.675456 systemd[1]: sshd@13-10.128.0.66:22-147.75.109.163:59898.service: 
Deactivated successfully. Feb 9 19:27:23.687365 systemd-logind[1201]: Session 14 logged out. Waiting for processes to exit. Feb 9 19:27:23.641000 audit[5081]: CRED_DISP pid=5081 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:23.693565 systemd[1]: Started sshd@14-10.128.0.66:22-147.75.109.163:59910.service. Feb 9 19:27:23.694326 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 19:27:23.717674 kernel: audit: type=1104 audit(1707506843.641:375): pid=5081 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:23.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.128.0.66:22-147.75.109.163:59898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:23.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.128.0.66:22-147.75.109.163:59910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:23.717463 systemd-logind[1201]: Removed session 14. Feb 9 19:27:23.745920 systemd[1]: run-containerd-runc-k8s.io-510b05576ebee6d96d8bdf2244d4f3074861e04e769fd094f51354ca5fa14d1a-runc.9skDFC.mount: Deactivated successfully. Feb 9 19:27:24.006000 audit[5098]: USER_ACCT pid=5098 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:24.008813 sshd[5098]: Accepted publickey for core from 147.75.109.163 port 59910 ssh2: RSA SHA256:2enIA9a+Ie+oz8jW4x9GsRBGLqIoWe8fFi/jhwNVhOs Feb 9 19:27:24.009000 audit[5098]: CRED_ACQ pid=5098 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:24.009000 audit[5098]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc93763f0 a2=3 a3=0 items=0 ppid=1 pid=5098 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:24.009000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:27:24.010842 sshd[5098]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:27:24.019978 systemd[1]: Started session-15.scope. Feb 9 19:27:24.021542 systemd-logind[1201]: New session 15 of user core. 
Feb 9 19:27:24.031000 audit[5098]: USER_START pid=5098 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:24.034000 audit[5119]: CRED_ACQ pid=5119 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:25.763128 sshd[5098]: pam_unix(sshd:session): session closed for user core Feb 9 19:27:25.764000 audit[5098]: USER_END pid=5098 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:25.765000 audit[5098]: CRED_DISP pid=5098 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:25.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.128.0.66:22-147.75.109.163:59910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:25.767619 systemd[1]: sshd@14-10.128.0.66:22-147.75.109.163:59910.service: Deactivated successfully. Feb 9 19:27:25.770561 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 19:27:25.771325 systemd-logind[1201]: Session 15 logged out. Waiting for processes to exit. Feb 9 19:27:25.773652 systemd-logind[1201]: Removed session 15. Feb 9 19:27:25.808121 systemd[1]: Started sshd@15-10.128.0.66:22-147.75.109.163:46790.service. Feb 9 19:27:25.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.128.0.66:22-147.75.109.163:46790 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:27:26.095000 audit[5127]: USER_ACCT pid=5127 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:26.096257 sshd[5127]: Accepted publickey for core from 147.75.109.163 port 46790 ssh2: RSA SHA256:2enIA9a+Ie+oz8jW4x9GsRBGLqIoWe8fFi/jhwNVhOs Feb 9 19:27:26.097000 audit[5127]: CRED_ACQ pid=5127 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:26.097000 audit[5127]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe5bc38870 a2=3 a3=0 items=0 ppid=1 pid=5127 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:26.097000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:27:26.098860 sshd[5127]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:27:26.106211 systemd-logind[1201]: New session 16 of user core. Feb 9 19:27:26.107104 systemd[1]: Started session-16.scope. Feb 9 19:27:26.115000 audit[5127]: USER_START pid=5127 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:26.117000 audit[5130]: CRED_ACQ pid=5130 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:26.421563 sshd[5127]: pam_unix(sshd:session): session closed for user core Feb 9 19:27:26.422000 audit[5127]: USER_END pid=5127 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:26.423000 audit[5127]: CRED_DISP pid=5127 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:26.426276 systemd-logind[1201]: Session 16 logged out. Waiting for processes to exit. Feb 9 19:27:26.426880 systemd[1]: sshd@15-10.128.0.66:22-147.75.109.163:46790.service: Deactivated successfully. Feb 9 19:27:26.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.128.0.66:22-147.75.109.163:46790 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:26.428245 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 19:27:26.430803 systemd-logind[1201]: Removed session 16. 
Feb 9 19:27:31.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.128.0.66:22-147.75.109.163:46800 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:31.464506 systemd[1]: Started sshd@16-10.128.0.66:22-147.75.109.163:46800.service. Feb 9 19:27:31.483680 kernel: kauditd_printk_skb: 23 callbacks suppressed Feb 9 19:27:31.483870 kernel: audit: type=1130 audit(1707506851.464:395): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.128.0.66:22-147.75.109.163:46800 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:31.785432 kernel: audit: type=1101 audit(1707506851.755:396): pid=5148 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:31.755000 audit[5148]: USER_ACCT pid=5148 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:31.786673 sshd[5148]: Accepted publickey for core from 147.75.109.163 port 46800 ssh2: RSA SHA256:2enIA9a+Ie+oz8jW4x9GsRBGLqIoWe8fFi/jhwNVhOs Feb 9 19:27:31.787609 sshd[5148]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:27:31.785000 audit[5148]: CRED_ACQ pid=5148 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:31.795857 systemd[1]: Started session-17.scope. Feb 9 19:27:31.798140 systemd-logind[1201]: New session 17 of user core. 
Feb 9 19:27:31.815543 kernel: audit: type=1103 audit(1707506851.785:397): pid=5148 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:31.836373 kernel: audit: type=1006 audit(1707506851.785:398): pid=5148 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Feb 9 19:27:31.836598 kernel: audit: type=1300 audit(1707506851.785:398): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc9f46f420 a2=3 a3=0 items=0 ppid=1 pid=5148 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:31.785000 audit[5148]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc9f46f420 a2=3 a3=0 items=0 ppid=1 pid=5148 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:31.860333 kernel: audit: type=1327 audit(1707506851.785:398): proctitle=737368643A20636F7265205B707269765D Feb 9 19:27:31.785000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:27:31.818000 audit[5148]: USER_START pid=5148 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:31.902823 kernel: audit: type=1105 audit(1707506851.818:399): pid=5148 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:31.903355 kernel: audit: type=1103 audit(1707506851.821:400): pid=5151 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:31.821000 audit[5151]: CRED_ACQ pid=5151 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:31.960591 systemd[1]: run-containerd-runc-k8s.io-510b05576ebee6d96d8bdf2244d4f3074861e04e769fd094f51354ca5fa14d1a-runc.V038Os.mount: Deactivated successfully. Feb 9 19:27:32.113705 sshd[5148]: pam_unix(sshd:session): session closed for user core Feb 9 19:27:32.116000 audit[5148]: USER_END pid=5148 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:32.119477 systemd[1]: sshd@16-10.128.0.66:22-147.75.109.163:46800.service: Deactivated successfully. Feb 9 19:27:32.120769 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 19:27:32.130925 systemd-logind[1201]: Session 17 logged out. Waiting for processes to exit. 
Feb 9 19:27:32.132582 systemd-logind[1201]: Removed session 17. Feb 9 19:27:32.149340 kernel: audit: type=1106 audit(1707506852.116:401): pid=5148 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:32.149454 kernel: audit: type=1104 audit(1707506852.116:402): pid=5148 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:32.116000 audit[5148]: CRED_DISP pid=5148 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:32.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.128.0.66:22-147.75.109.163:46800 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:32.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.128.0.66:22-147.75.109.163:46802 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:32.178616 systemd[1]: Started sshd@17-10.128.0.66:22-147.75.109.163:46802.service. Feb 9 19:27:32.473000 audit[5179]: USER_ACCT pid=5179 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:32.474318 sshd[5179]: Accepted publickey for core from 147.75.109.163 port 46802 ssh2: RSA SHA256:2enIA9a+Ie+oz8jW4x9GsRBGLqIoWe8fFi/jhwNVhOs Feb 9 19:27:32.475000 audit[5179]: CRED_ACQ pid=5179 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:32.475000 audit[5179]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff4a1fb160 a2=3 a3=0 items=0 ppid=1 pid=5179 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:32.475000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:27:32.476187 sshd[5179]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:27:32.482363 systemd-logind[1201]: New session 18 of user core. Feb 9 19:27:32.483881 systemd[1]: Started session-18.scope. 
Feb 9 19:27:32.496000 audit[5179]: USER_START pid=5179 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:32.498000 audit[5182]: CRED_ACQ pid=5182 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:32.827351 sshd[5179]: pam_unix(sshd:session): session closed for user core Feb 9 19:27:32.829000 audit[5179]: USER_END pid=5179 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:32.829000 audit[5179]: CRED_DISP pid=5179 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:32.832270 systemd[1]: sshd@17-10.128.0.66:22-147.75.109.163:46802.service: Deactivated successfully. Feb 9 19:27:32.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.128.0.66:22-147.75.109.163:46802 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:32.834275 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 19:27:32.834483 systemd-logind[1201]: Session 18 logged out. Waiting for processes to exit. Feb 9 19:27:32.836157 systemd-logind[1201]: Removed session 18. Feb 9 19:27:32.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.128.0.66:22-147.75.109.163:46814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:32.872452 systemd[1]: Started sshd@18-10.128.0.66:22-147.75.109.163:46814.service. 
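Each accepted connection appears to get its own socket-activated unit, named sshd@<n>-<local address>:<port>-<peer address>:<port>.service, and the SERVICE_START/SERVICE_STOP records quote that name verbatim, so the endpoints can be recovered from the unit name alone. A small parser for the naming pattern as it appears in this log (IPv4 only; the pattern is inferred from these unit names, not taken from systemd documentation):

# sshd_unit.py - split a per-connection sshd unit name into its endpoints.
import re

UNIT_RE = re.compile(
    r"sshd@(?P<idx>\d+)-(?P<laddr>[\d.]+):(?P<lport>\d+)-(?P<raddr>[\d.]+):(?P<rport>\d+)\.service"
)

def parse_unit(name: str) -> dict:
    """Return connection index plus local/remote address:port for an sshd@ unit."""
    m = UNIT_RE.fullmatch(name)
    if m is None:
        raise ValueError(f"not a per-connection sshd unit: {name!r}")
    return {
        "index": int(m["idx"]),
        "local": (m["laddr"], int(m["lport"])),
        "remote": (m["raddr"], int(m["rport"])),
    }

if __name__ == "__main__":
    # Unit name copied from the SERVICE_START record just above.
    print(parse_unit("sshd@18-10.128.0.66:22-147.75.109.163:46814.service"))
    # -> {'index': 18, 'local': ('10.128.0.66', 22), 'remote': ('147.75.109.163', 46814)}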
Feb 9 19:27:33.158000 audit[5190]: USER_ACCT pid=5190 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:33.160233 sshd[5190]: Accepted publickey for core from 147.75.109.163 port 46814 ssh2: RSA SHA256:2enIA9a+Ie+oz8jW4x9GsRBGLqIoWe8fFi/jhwNVhOs Feb 9 19:27:33.160000 audit[5190]: CRED_ACQ pid=5190 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:33.160000 audit[5190]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe70cb630 a2=3 a3=0 items=0 ppid=1 pid=5190 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:33.160000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:27:33.161250 sshd[5190]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:27:33.168404 systemd[1]: Started session-19.scope. Feb 9 19:27:33.169868 systemd-logind[1201]: New session 19 of user core. Feb 9 19:27:33.177000 audit[5190]: USER_START pid=5190 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:33.180000 audit[5193]: CRED_ACQ pid=5193 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:34.406584 sshd[5190]: pam_unix(sshd:session): session closed for user core Feb 9 19:27:34.408000 audit[5190]: USER_END pid=5190 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:34.409000 audit[5190]: CRED_DISP pid=5190 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:34.412530 systemd-logind[1201]: Session 19 logged out. Waiting for processes to exit. Feb 9 19:27:34.412777 systemd[1]: sshd@18-10.128.0.66:22-147.75.109.163:46814.service: Deactivated successfully. Feb 9 19:27:34.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.128.0.66:22-147.75.109.163:46814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:34.415315 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 19:27:34.416414 systemd-logind[1201]: Removed session 19. Feb 9 19:27:34.450707 systemd[1]: Started sshd@19-10.128.0.66:22-147.75.109.163:50062.service. 
Feb 9 19:27:34.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.128.0.66:22-147.75.109.163:50062 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:34.545000 audit[5231]: NETFILTER_CFG table=filter:137 family=2 entries=18 op=nft_register_rule pid=5231 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:27:34.545000 audit[5231]: SYSCALL arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffcdbc11520 a2=0 a3=7ffcdbc1150c items=0 ppid=2426 pid=5231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:34.545000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:27:34.565000 audit[5231]: NETFILTER_CFG table=nat:138 family=2 entries=94 op=nft_register_rule pid=5231 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:27:34.565000 audit[5231]: SYSCALL arch=c000003e syscall=46 success=yes exit=30372 a0=3 a1=7ffcdbc11520 a2=0 a3=7ffcdbc1150c items=0 ppid=2426 pid=5231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:34.565000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:27:34.704000 audit[5257]: NETFILTER_CFG table=filter:139 family=2 entries=30 op=nft_register_rule pid=5257 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:27:34.704000 audit[5257]: SYSCALL arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffd5f005fd0 a2=0 a3=7ffd5f005fbc items=0 ppid=2426 pid=5257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:34.704000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:27:34.708000 audit[5257]: NETFILTER_CFG table=nat:140 family=2 entries=94 op=nft_register_rule pid=5257 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:27:34.708000 audit[5257]: SYSCALL arch=c000003e syscall=46 success=yes exit=30372 a0=3 a1=7ffd5f005fd0 a2=0 a3=7ffd5f005fbc items=0 ppid=2426 pid=5257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:34.708000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:27:34.762000 audit[5219]: USER_ACCT pid=5219 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:34.763996 sshd[5219]: Accepted publickey for core from 147.75.109.163 port 50062 ssh2: RSA SHA256:2enIA9a+Ie+oz8jW4x9GsRBGLqIoWe8fFi/jhwNVhOs Feb 9 19:27:34.764000 audit[5219]: CRED_ACQ pid=5219 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:34.765000 audit[5219]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdba3bfe20 a2=3 a3=0 items=0 ppid=1 pid=5219 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:34.765000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:27:34.766626 sshd[5219]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:27:34.781908 systemd[1]: Started session-20.scope. Feb 9 19:27:34.783031 systemd-logind[1201]: New session 20 of user core. Feb 9 19:27:34.792000 audit[5219]: USER_START pid=5219 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:34.796000 audit[5259]: CRED_ACQ pid=5259 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:35.310477 sshd[5219]: pam_unix(sshd:session): session closed for user core Feb 9 19:27:35.312000 audit[5219]: USER_END pid=5219 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:35.312000 audit[5219]: CRED_DISP pid=5219 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:35.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.128.0.66:22-147.75.109.163:50062 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:35.314669 systemd[1]: sshd@19-10.128.0.66:22-147.75.109.163:50062.service: Deactivated successfully. Feb 9 19:27:35.316199 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 19:27:35.320393 systemd-logind[1201]: Session 20 logged out. Waiting for processes to exit. Feb 9 19:27:35.322966 systemd-logind[1201]: Removed session 20. Feb 9 19:27:35.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.128.0.66:22-147.75.109.163:50078 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:35.356023 systemd[1]: Started sshd@20-10.128.0.66:22-147.75.109.163:50078.service. 
Feb 9 19:27:35.657000 audit[5269]: USER_ACCT pid=5269 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:35.657849 sshd[5269]: Accepted publickey for core from 147.75.109.163 port 50078 ssh2: RSA SHA256:2enIA9a+Ie+oz8jW4x9GsRBGLqIoWe8fFi/jhwNVhOs Feb 9 19:27:35.659000 audit[5269]: CRED_ACQ pid=5269 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:35.659000 audit[5269]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffa911e820 a2=3 a3=0 items=0 ppid=1 pid=5269 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:35.659000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:27:35.660590 sshd[5269]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:27:35.667874 systemd[1]: Started session-21.scope. Feb 9 19:27:35.668713 systemd-logind[1201]: New session 21 of user core. Feb 9 19:27:35.677000 audit[5269]: USER_START pid=5269 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:35.680000 audit[5272]: CRED_ACQ pid=5272 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:35.943769 sshd[5269]: pam_unix(sshd:session): session closed for user core Feb 9 19:27:35.945000 audit[5269]: USER_END pid=5269 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:35.945000 audit[5269]: CRED_DISP pid=5269 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:35.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.128.0.66:22-147.75.109.163:50078 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:35.948331 systemd[1]: sshd@20-10.128.0.66:22-147.75.109.163:50078.service: Deactivated successfully. Feb 9 19:27:35.950480 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 19:27:35.950538 systemd-logind[1201]: Session 21 logged out. Waiting for processes to exit. Feb 9 19:27:35.952408 systemd-logind[1201]: Removed session 21. 
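In the SYSCALL records, arch=c000003e is the audit architecture token for x86_64, so the syscall numbers resolve against the x86_64 table: 46 is sendmsg (xtables-nft-multi pushing its ruleset over netlink) and 1 is write, here most likely pam_loginuid writing the new auid to /proc/self/loginuid, which would also explain exit=3 (three bytes for "500"). A minimal lookup covering only the numbers seen in this excerpt, as a sketch:

    # Only the two syscall numbers that occur in this excerpt; a full table
    # would come from the kernel's x86_64 syscall list.
    X86_64_SYSCALLS = {1: "write", 46: "sendmsg"}

    def describe_syscall(arch: str, nr: int) -> str:
        if arch.lower() != "c000003e":          # AUDIT_ARCH_X86_64
            return f"arch {arch} syscall {nr}"
        return X86_64_SYSCALLS.get(nr, f"syscall {nr}")

    print(describe_syscall("c000003e", 1))   # write   (sshd records)
    print(describe_syscall("c000003e", 46))  # sendmsg (iptables-restore records)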
Feb 9 19:27:40.963338 kernel: kauditd_printk_skb: 57 callbacks suppressed Feb 9 19:27:40.963544 kernel: audit: type=1325 audit(1707506860.956:444): table=filter:141 family=2 entries=18 op=nft_register_rule pid=5308 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:27:40.956000 audit[5308]: NETFILTER_CFG table=filter:141 family=2 entries=18 op=nft_register_rule pid=5308 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:27:40.956000 audit[5308]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffd6a190af0 a2=0 a3=7ffd6a190adc items=0 ppid=2426 pid=5308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:41.012966 kernel: audit: type=1300 audit(1707506860.956:444): arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffd6a190af0 a2=0 a3=7ffd6a190adc items=0 ppid=2426 pid=5308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:41.029455 kernel: audit: type=1327 audit(1707506860.956:444): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:27:40.956000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:27:41.019838 systemd[1]: Started sshd@21-10.128.0.66:22-147.75.109.163:50092.service. Feb 9 19:27:41.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.128.0.66:22-147.75.109.163:50092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:41.063477 kernel: audit: type=1130 audit(1707506861.018:445): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.128.0.66:22-147.75.109.163:50092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:27:41.063714 kernel: audit: type=1325 audit(1707506861.042:446): table=nat:142 family=2 entries=178 op=nft_register_chain pid=5308 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:27:41.042000 audit[5308]: NETFILTER_CFG table=nat:142 family=2 entries=178 op=nft_register_chain pid=5308 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:27:41.083998 kernel: audit: type=1300 audit(1707506861.042:446): arch=c000003e syscall=46 success=yes exit=72324 a0=3 a1=7ffd6a190af0 a2=0 a3=7ffd6a190adc items=0 ppid=2426 pid=5308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:41.042000 audit[5308]: SYSCALL arch=c000003e syscall=46 success=yes exit=72324 a0=3 a1=7ffd6a190af0 a2=0 a3=7ffd6a190adc items=0 ppid=2426 pid=5308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:41.123783 kernel: audit: type=1327 audit(1707506861.042:446): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:27:41.042000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:27:41.382000 audit[5309]: USER_ACCT pid=5309 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:41.386237 sshd[5309]: Accepted publickey for core from 147.75.109.163 port 50092 ssh2: RSA SHA256:2enIA9a+Ie+oz8jW4x9GsRBGLqIoWe8fFi/jhwNVhOs Feb 9 19:27:41.414336 kernel: audit: type=1101 audit(1707506861.382:447): pid=5309 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:41.415051 sshd[5309]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:27:41.423395 systemd-logind[1201]: New session 22 of user core. Feb 9 19:27:41.413000 audit[5309]: CRED_ACQ pid=5309 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:41.431642 systemd[1]: Started session-22.scope. 
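The audit(1707506861.042:446) stamp in the kernel-printed copies is a UNIX epoch time in seconds.milliseconds followed by an event serial number; converting it lands on the same wall-clock second as the journald prefix of that line (Feb 9 19:27:41 UTC). A small sketch, with audit_stamp as an illustrative helper name:

    import re
    from datetime import datetime, timezone

    AUDIT_STAMP = re.compile(r"audit\((\d+)\.(\d+):(\d+)\)")

    def audit_stamp(text: str):
        """Return (UTC datetime, event serial) from an 'audit(...)' stamp."""
        sec, msec, serial = AUDIT_STAMP.search(text).groups()
        when = datetime.fromtimestamp(int(sec) + int(msec) / 1000, tz=timezone.utc)
        return when, int(serial)

    when, serial = audit_stamp("type=1325 audit(1707506861.042:446): table=nat:142")
    print(when, serial)  # 2024-02-09 19:27:41.042 UTC, serial 446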
Feb 9 19:27:41.450331 kernel: audit: type=1103 audit(1707506861.413:448): pid=5309 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:41.413000 audit[5309]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe2634e3a0 a2=3 a3=0 items=0 ppid=1 pid=5309 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:41.475568 kernel: audit: type=1006 audit(1707506861.413:449): pid=5309 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Feb 9 19:27:41.413000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:27:41.439000 audit[5309]: USER_START pid=5309 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:41.455000 audit[5312]: CRED_ACQ pid=5312 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:41.805525 sshd[5309]: pam_unix(sshd:session): session closed for user core Feb 9 19:27:41.806000 audit[5309]: USER_END pid=5309 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:41.806000 audit[5309]: CRED_DISP pid=5309 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:41.811272 systemd[1]: sshd@21-10.128.0.66:22-147.75.109.163:50092.service: Deactivated successfully. Feb 9 19:27:41.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.128.0.66:22-147.75.109.163:50092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:41.812682 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 19:27:41.814146 systemd-logind[1201]: Session 22 logged out. Waiting for processes to exit. Feb 9 19:27:41.818152 systemd-logind[1201]: Removed session 22. Feb 9 19:27:45.635735 systemd[1]: run-containerd-runc-k8s.io-70f0f32a7febb78233af606056990407f65e20b485ef0338d6bc7630fdd27813-runc.zuX73f.mount: Deactivated successfully. Feb 9 19:27:46.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.128.0.66:22-147.75.109.163:36486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:46.849147 systemd[1]: Started sshd@22-10.128.0.66:22-147.75.109.163:36486.service. 
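Apart from the hex proctitle and the PAM detail quoted inside msg='...', every audit record above is a flat run of key=value fields, so parsing only needs to lift the nested msg out before splitting on whitespace. A sketch (parse_audit_record is an illustrative name, not an existing tool), run against a shortened USER_START record from this session:

    import re

    def parse_audit_record(record: str) -> dict:
        """Flatten an audit record into a dict of key=value fields.

        The PAM detail nested in msg='...' is extracted first so its embedded
        spaces do not break the outer whitespace split; quoted values are
        unquoted."""
        def pairs(text):
            for token in text.split():
                if "=" in token:
                    key, value = token.split("=", 1)
                    yield key, value.strip('"')

        fields = {}
        nested = re.search(r"msg='([^']*)'", record)
        if nested:
            fields.update(pairs(nested.group(1)))
            record = record[:nested.start()] + record[nested.end():]
        fields.update(pairs(record))
        return fields

    rec = ("USER_START pid=5342 uid=0 auid=500 ses=23 "
           "msg='op=PAM:session_open acct=\"core\" terminal=ssh res=success'")
    fields = parse_audit_record(rec)
    print(fields["ses"], fields["acct"], fields["res"])  # 23 core success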
Feb 9 19:27:46.855435 kernel: kauditd_printk_skb: 7 callbacks suppressed Feb 9 19:27:46.855546 kernel: audit: type=1130 audit(1707506866.848:455): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.128.0.66:22-147.75.109.163:36486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:47.146000 audit[5342]: USER_ACCT pid=5342 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:47.177708 sshd[5342]: Accepted publickey for core from 147.75.109.163 port 36486 ssh2: RSA SHA256:2enIA9a+Ie+oz8jW4x9GsRBGLqIoWe8fFi/jhwNVhOs Feb 9 19:27:47.178443 kernel: audit: type=1101 audit(1707506867.146:456): pid=5342 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:47.178881 sshd[5342]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:27:47.188956 systemd[1]: Started session-23.scope. Feb 9 19:27:47.192047 systemd-logind[1201]: New session 23 of user core. Feb 9 19:27:47.176000 audit[5342]: CRED_ACQ pid=5342 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:47.235490 kernel: audit: type=1103 audit(1707506867.176:457): pid=5342 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:47.235628 kernel: audit: type=1006 audit(1707506867.176:458): pid=5342 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Feb 9 19:27:47.176000 audit[5342]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd4d57b210 a2=3 a3=0 items=0 ppid=1 pid=5342 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:47.248494 kernel: audit: type=1300 audit(1707506867.176:458): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd4d57b210 a2=3 a3=0 items=0 ppid=1 pid=5342 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:47.176000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:27:47.277338 kernel: audit: type=1327 audit(1707506867.176:458): proctitle=737368643A20636F7265205B707269765D Feb 9 19:27:47.210000 audit[5342]: USER_START pid=5342 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:47.319398 kernel: audit: type=1105 audit(1707506867.210:459): pid=5342 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:47.319570 kernel: audit: type=1103 audit(1707506867.215:460): pid=5345 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:47.215000 audit[5345]: CRED_ACQ pid=5345 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:47.510569 sshd[5342]: pam_unix(sshd:session): session closed for user core Feb 9 19:27:47.511000 audit[5342]: USER_END pid=5342 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:47.534887 systemd[1]: sshd@22-10.128.0.66:22-147.75.109.163:36486.service: Deactivated successfully. Feb 9 19:27:47.537314 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 19:27:47.539321 systemd-logind[1201]: Session 23 logged out. Waiting for processes to exit. Feb 9 19:27:47.540825 systemd-logind[1201]: Removed session 23. Feb 9 19:27:47.547403 kernel: audit: type=1106 audit(1707506867.511:461): pid=5342 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:47.511000 audit[5342]: CRED_DISP pid=5342 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:47.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.128.0.66:22-147.75.109.163:36486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:47.573330 kernel: audit: type=1104 audit(1707506867.511:462): pid=5342 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:50.516863 systemd[1]: run-containerd-runc-k8s.io-254da3db45a268d5dd460815b924fe873b62b37f9da777b98b5127ae531416ea-runc.ev4DFL.mount: Deactivated successfully. Feb 9 19:27:50.537624 systemd[1]: run-containerd-runc-k8s.io-ca306ec4862351cd0c0ba4a764a7b15afd94ce553b02da727af3dbf3932ff2da-runc.kVg6bw.mount: Deactivated successfully. Feb 9 19:27:52.585398 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:27:52.585566 kernel: audit: type=1130 audit(1707506872.553:464): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.128.0.66:22-147.75.109.163:36502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:27:52.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.128.0.66:22-147.75.109.163:36502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:52.554146 systemd[1]: Started sshd@23-10.128.0.66:22-147.75.109.163:36502.service. Feb 9 19:27:52.844000 audit[5402]: USER_ACCT pid=5402 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:52.876266 sshd[5402]: Accepted publickey for core from 147.75.109.163 port 36502 ssh2: RSA SHA256:2enIA9a+Ie+oz8jW4x9GsRBGLqIoWe8fFi/jhwNVhOs Feb 9 19:27:52.876922 kernel: audit: type=1101 audit(1707506872.844:465): pid=5402 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:52.876000 audit[5402]: CRED_ACQ pid=5402 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:52.878631 sshd[5402]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:27:52.887209 systemd[1]: Started session-24.scope. Feb 9 19:27:52.889140 systemd-logind[1201]: New session 24 of user core. Feb 9 19:27:52.905245 kernel: audit: type=1103 audit(1707506872.876:466): pid=5402 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:52.907841 kernel: audit: type=1006 audit(1707506872.876:467): pid=5402 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Feb 9 19:27:52.876000 audit[5402]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd5600ed50 a2=3 a3=0 items=0 ppid=1 pid=5402 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:52.923324 kernel: audit: type=1300 audit(1707506872.876:467): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd5600ed50 a2=3 a3=0 items=0 ppid=1 pid=5402 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:52.876000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:27:52.960236 kernel: audit: type=1327 audit(1707506872.876:467): proctitle=737368643A20636F7265205B707269765D Feb 9 19:27:52.960378 kernel: audit: type=1105 audit(1707506872.904:468): pid=5402 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:52.904000 audit[5402]: USER_START pid=5402 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:52.905000 audit[5405]: CRED_ACQ pid=5405 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:53.017174 kernel: audit: type=1103 audit(1707506872.905:469): pid=5405 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:53.180894 sshd[5402]: pam_unix(sshd:session): session closed for user core Feb 9 19:27:53.181000 audit[5402]: USER_END pid=5402 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:53.186176 systemd-logind[1201]: Session 24 logged out. Waiting for processes to exit. Feb 9 19:27:53.188488 systemd[1]: sshd@23-10.128.0.66:22-147.75.109.163:36502.service: Deactivated successfully. Feb 9 19:27:53.189848 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 19:27:53.191759 systemd-logind[1201]: Removed session 24. Feb 9 19:27:53.218535 kernel: audit: type=1106 audit(1707506873.181:470): pid=5402 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:53.218667 kernel: audit: type=1104 audit(1707506873.181:471): pid=5402 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:53.181000 audit[5402]: CRED_DISP pid=5402 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:53.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.128.0.66:22-147.75.109.163:36502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:53.705957 systemd[1]: run-containerd-runc-k8s.io-510b05576ebee6d96d8bdf2244d4f3074861e04e769fd094f51354ca5fa14d1a-runc.oeKcTB.mount: Deactivated successfully. Feb 9 19:27:58.225852 systemd[1]: Started sshd@24-10.128.0.66:22-147.75.109.163:46064.service. Feb 9 19:27:58.256779 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:27:58.256831 kernel: audit: type=1130 audit(1707506878.224:473): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.128.0.66:22-147.75.109.163:46064 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:27:58.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.128.0.66:22-147.75.109.163:46064 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:27:58.515000 audit[5444]: USER_ACCT pid=5444 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:58.547503 kernel: audit: type=1101 audit(1707506878.515:474): pid=5444 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:58.547612 sshd[5444]: Accepted publickey for core from 147.75.109.163 port 46064 ssh2: RSA SHA256:2enIA9a+Ie+oz8jW4x9GsRBGLqIoWe8fFi/jhwNVhOs Feb 9 19:27:58.547890 sshd[5444]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:27:58.545000 audit[5444]: CRED_ACQ pid=5444 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:58.556754 systemd[1]: Started session-25.scope. Feb 9 19:27:58.558659 systemd-logind[1201]: New session 25 of user core. Feb 9 19:27:58.574325 kernel: audit: type=1103 audit(1707506878.545:475): pid=5444 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:58.596519 kernel: audit: type=1006 audit(1707506878.546:476): pid=5444 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Feb 9 19:27:58.596628 kernel: audit: type=1300 audit(1707506878.546:476): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffcc0c8980 a2=3 a3=0 items=0 ppid=1 pid=5444 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:58.546000 audit[5444]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffcc0c8980 a2=3 a3=0 items=0 ppid=1 pid=5444 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:27:58.622546 kernel: audit: type=1327 audit(1707506878.546:476): proctitle=737368643A20636F7265205B707269765D Feb 9 19:27:58.546000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:27:58.563000 audit[5444]: USER_START pid=5444 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:58.632332 kernel: audit: type=1105 audit(1707506878.563:477): pid=5444 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:58.568000 audit[5447]: CRED_ACQ pid=5447 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:58.688945 kernel: audit: type=1103 audit(1707506878.568:478): pid=5447 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:58.815417 sshd[5444]: pam_unix(sshd:session): session closed for user core Feb 9 19:27:58.816000 audit[5444]: USER_END pid=5444 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:58.816000 audit[5444]: CRED_DISP pid=5444 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:58.852709 systemd[1]: sshd@24-10.128.0.66:22-147.75.109.163:46064.service: Deactivated successfully. Feb 9 19:27:58.854964 systemd[1]: session-25.scope: Deactivated successfully. Feb 9 19:27:58.865807 systemd-logind[1201]: Session 25 logged out. Waiting for processes to exit. Feb 9 19:27:58.867567 systemd-logind[1201]: Removed session 25. Feb 9 19:27:58.875346 kernel: audit: type=1106 audit(1707506878.816:479): pid=5444 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:58.875530 kernel: audit: type=1104 audit(1707506878.816:480): pid=5444 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 9 19:27:58.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.128.0.66:22-147.75.109.163:46064 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:28:01.733087 update_engine[1202]: I0209 19:28:01.733024 1202 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 9 19:28:01.733087 update_engine[1202]: I0209 19:28:01.733084 1202 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 9 19:28:01.734205 update_engine[1202]: I0209 19:28:01.734167 1202 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 9 19:28:01.734829 update_engine[1202]: I0209 19:28:01.734789 1202 omaha_request_params.cc:62] Current group set to lts Feb 9 19:28:01.735210 update_engine[1202]: I0209 19:28:01.734997 1202 update_attempter.cc:499] Already updated boot flags. Skipping. 
Feb 9 19:28:01.735210 update_engine[1202]: I0209 19:28:01.735013 1202 update_attempter.cc:643] Scheduling an action processor start. Feb 9 19:28:01.735210 update_engine[1202]: I0209 19:28:01.735036 1202 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 9 19:28:01.735210 update_engine[1202]: I0209 19:28:01.735074 1202 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 9 19:28:01.735210 update_engine[1202]: I0209 19:28:01.735155 1202 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 9 19:28:01.735210 update_engine[1202]: I0209 19:28:01.735164 1202 omaha_request_action.cc:271] Request: Feb 9 19:28:01.735210 update_engine[1202]: [Omaha request XML body not preserved in this capture] Feb 9 19:28:01.735210 update_engine[1202]: I0209 19:28:01.735172 1202 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 19:28:01.736377 locksmithd[1265]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 9 19:28:01.736904 update_engine[1202]: I0209 19:28:01.736869 1202 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 19:28:01.737153 update_engine[1202]: I0209 19:28:01.737118 1202 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 9 19:28:01.783344 update_engine[1202]: E0209 19:28:01.783257 1202 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 19:28:01.783553 update_engine[1202]: I0209 19:28:01.783468 1202 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
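The update_engine entries use a glog-style prefix: a severity letter (I/W/E/F), MMDD, wall-clock time, an id, and file.cc:line] ahead of the message. The failed fetch at the end is expected on this host: the Omaha endpoint is configured as the literal string "disabled", so curl's "Could not resolve host: disabled" simply reflects that update checks point at a non-resolving placeholder, and the client schedules a retry. A sketch parser for these lines (GLOG_LINE is an illustrative name):

    import re

    GLOG_LINE = re.compile(
        r"(?P<sev>[IWEF])(?P<mmdd>\d{4}) (?P<time>[\d:.]+)\s+(?P<id>\d+) "
        r"(?P<src>[\w./]+:\d+)\] (?P<msg>.*)"
    )

    line = ("E0209 19:28:01.783257 1202 libcurl_http_fetcher.cc:266] "
            "Unable to get http response code: Could not resolve host: disabled")
    m = GLOG_LINE.match(line)
    print(m["sev"], m["src"], "->", m["msg"])
    # E libcurl_http_fetcher.cc:266 -> Unable to get http response code: ...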